WO2022247363A1 - Content processing method, apparatus, system, storage medium and electronic device - Google Patents

Content processing method, apparatus, system, storage medium and electronic device

Info

Publication number
WO2022247363A1
WO2022247363A1 (PCT/CN2022/077137)
Authority
WO
WIPO (PCT)
Prior art keywords
content
smart glasses
task
content processing
processing method
Prior art date
Application number
PCT/CN2022/077137
Other languages
English (en)
French (fr)
Inventor
林鼎豪
陈碧莹
刘章奇
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Publication of WO2022247363A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/60Subscription-based services using application servers or record carriers, e.g. SIM application toolkits
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/162Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/70Services for machine-to-machine communication [M2M] or machine type communication [MTC]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/80Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication

Definitions

  • the present disclosure relates to the technical field of computer control, and in particular, to a content processing method, a content processing device, a content processing system, a computer-readable storage medium, and electronic equipment.
  • smart glasses can provide users with more and more functions, bringing convenience to work and life.
  • Smart glasses can present some content to users by interacting with other devices.
  • users need to perform lengthy configuration operations on the device side.
  • the process is complicated and the learning cost is high, which is not conducive to the promotion of smart glasses functions.
  • a content processing method applied to a content sending device, including: determining target content to be sent; and, when smart glasses are identified based on a first communication method, sending the target content to the smart glasses through a second communication method, so that the smart glasses can play the target content.
  • a content processing apparatus applied to a content sending device, including: a content determining module configured to determine target content to be sent; and a content sending module configured to, when smart glasses are identified based on a first communication method, send the target content to the smart glasses through a second communication method, so that the smart glasses can play the target content.
  • a content processing system including: a content sending device configured to determine target content to be sent and, when smart glasses are identified based on a first communication method, send the target content to the smart glasses through a second communication method; and the smart glasses, configured to play the target content.
  • a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the above content processing method is realized.
  • an electronic device including a processor; a memory configured to store one or more programs, and when the one or more programs are executed by the processor, the processor implements the above content processing method.
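The claimed flow can be sketched informally as follows. This is a hedged illustration only: every class and field name is invented for the sketch rather than taken from the patent, with the first communication method standing in for NFC/UWB identification and the second for a Bluetooth or Wi-Fi P2P transfer.

```python
class NfcChannel:
    """Stub for the first communication method: a touch or approach
    operation yields the glasses' device information."""
    def identify_glasses(self):
        return {"mac": "AA:BB:CC:DD:EE:FF"}  # device info read on touch

class BluetoothChannel:
    """Stub for the second communication method: records what was sent."""
    def __init__(self):
        self.sent = []
    def send(self, glasses, content):
        self.sent.append((glasses["mac"], content))

def process(first, second, task_info):
    """Determine target content; if the glasses are identified over the
    first method, deliver the content over the second method."""
    content = {"type": "text", "data": task_info}
    glasses = first.identify_glasses()
    if glasses is not None:
        second.send(glasses, content)  # the glasses then play the content
        return True
    return False
```

Using the identification result as the trigger for sending, as the claims describe, keeps the glasses-side configuration at a single touch.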
  • FIG. 1 shows a schematic diagram of an exemplary architecture of a content processing system according to an embodiment of the present disclosure
  • FIG. 2 shows a schematic structural diagram of an electronic device suitable for implementing an embodiment of the present disclosure
  • FIG. 3 schematically shows a flowchart of a content processing method according to an exemplary embodiment of the present disclosure
  • FIG. 4 shows a schematic diagram of types of target content in an embodiment of the present disclosure
  • FIG. 5 schematically shows a flowchart of determining target content according to an embodiment of the present disclosure
  • FIG. 6 shows a schematic diagram of the process of determining target content according to task information in the present disclosure
  • FIG. 7 shows a structural diagram of smart glasses according to an embodiment of the present disclosure
  • FIG. 8 shows a schematic diagram of the interaction process of the content processing solution of an embodiment of the present disclosure
  • FIG. 9 shows a schematic diagram of a content processing solution taking a navigation scene as an example
  • FIG. 10 shows a schematic diagram of a content processing solution taking a sports scene as an example
  • FIG. 11 schematically shows a block diagram of a content processing apparatus according to an exemplary embodiment of the present disclosure
  • FIG. 12 schematically shows a block diagram of a content processing apparatus according to another exemplary embodiment of the present disclosure
  • FIG. 13 schematically shows a block diagram of a content processing apparatus according to yet another exemplary embodiment of the present disclosure.
  • Example embodiments will now be described more fully with reference to the accompanying drawings.
  • Example embodiments may, however, be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of example embodiments to those skilled in the art.
  • the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
  • numerous specific details are provided in order to give a thorough understanding of embodiments of the present disclosure.
  • those skilled in the art will appreciate that the technical solutions of the present disclosure may be practiced with one or more of the specific details omitted, or other methods, components, devices, steps, etc. may be adopted.
  • well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
  • Fig. 1 shows a schematic diagram of an exemplary architecture of a content processing system according to an embodiment of the present disclosure.
  • the content processing system may include a content sending device 11 and smart glasses 12 .
  • the content sending device 11 is a device for sending content to the smart glasses 12 .
  • the content mentioned in the embodiments of the present disclosure may refer to media content, including but not limited to images, audio, and so on.
  • the content sending device 11 may be any device capable of communicating with the smart glasses 12, including but not limited to smart phones, tablet computers, smart watches, and the like.
  • the content sending device 11 can be used to determine the target content to be sent, and when the smart glasses 12 are determined based on the first communication method, send the target content to the smart glasses 12 through the second communication method .
  • the second communication method has a longer transmission distance than the first communication method.
  • the target content can be played.
  • the smart glasses 12 may include a content receiving unit 121 , a light emitting unit 123 and an image display unit 125 .
  • the content receiving unit 121 may be configured to receive the image content sent by the content sending device 11 based on the second communication method.
  • the light emitting unit 123 can be used to play the image content.
  • the image display unit 125 can be used to display the image content played by the light emitting unit 123 .
  • the light emitting unit 123 may include an optical machine equipped on the smart glasses 12
  • the image display unit 125 may include the lenses of the smart glasses 12 .
  • the smart glasses 12 may further include an audio playback unit 127, and the audio playback unit 127 is configured to play the audio content.
  • when the target content includes both image content and audio content, the smart glasses 12 can control them to be played simultaneously or separately.
  • the target content sent by the content sending device 11 may be converted content based on the task information of the currently running task.
  • the smart glasses 12 may further include a task control unit 129 .
  • the task control unit 129 may be configured to respond to a user's task control operation, generate a task control instruction, and send the task control instruction to the content sending device 11 .
  • the content sending device 11 may control the currently running task in response to the task control instruction, for example, suspend the task, start the task, terminate the task, and so on.
  • the content processing method in the exemplary embodiment of the present disclosure is generally executed by the content sending device 11 , and accordingly, the content processing apparatus described below is generally configured in the content sending device 11 .
  • FIG. 2 shows a schematic diagram of an electronic device suitable for implementing an exemplary embodiment of the present disclosure.
  • the content transmission device of the exemplary embodiment of the present disclosure may be configured in the form of FIG. 2 .
  • the electronic device shown in FIG. 2 is only an example, and should not limit the functions and application scope of the embodiments of the present disclosure.
  • the electronic device of the present disclosure includes at least a processor and a memory, the memory is configured to store one or more programs, and when the one or more programs are executed by the processor, the processor can implement the content processing method of the exemplary embodiment of the present disclosure .
  • the electronic device 200 may include: a processor 210, an internal memory 221, an external memory interface 222, a Universal Serial Bus (Universal Serial Bus, USB) interface 230, a charging management module 240, and a power management module 241, battery 242, antenna 1, antenna 2, mobile communication module 250, wireless communication module 260, audio module 270, speaker 271, receiver 272, microphone 273, earphone interface 274, sensor module 280, display screen 290, camera module 291 , an indicator 292, a motor 293, a button 294, and a Subscriber Identification Module (Subscriber Identification Module, SIM) card interface 295, etc.
  • the sensor module 280 may include a depth sensor, a pressure sensor, a gyro sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, and a bone conduction sensor.
  • the structure illustrated in the embodiment of the present disclosure does not constitute a specific limitation on the electronic device 200 .
  • the electronic device 200 may include more or fewer components than shown in the illustration, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components may be realized in hardware, software, or a combination of software and hardware.
  • the processor 210 may include one or more processing units, for example: the processor 210 may include an application processor (Application Processor, AP), a modem processor, a graphics processor (Graphics Processing Unit, GPU), an image signal processor (Image Signal Processor, ISP), controller, video codec, digital signal processor (Digital Signal Processor, DSP), baseband processor and/or neural network processor (Neural-network Processing Unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • a memory may also be provided in the processor 210 configured to store instructions and data.
  • the wireless communication function of the electronic device 200 may be realized by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, a modem processor, a baseband processor, and the like.
  • the mobile communication module 250 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 200 .
  • the wireless communication module 260 can provide wireless local area network (Wireless Local Area Networks, WLAN) (such as wireless fidelity (Wireless Fidelity, Wi-Fi) network), bluetooth (Bluetooth, BT), global navigation satellite System (Global Navigation Satellite System, GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Ultra Wide Band (UWB), Infrared (IR) and other wireless communication solutions.
  • the electronic device 200 realizes the display function through the GPU, the display screen 290 and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 290 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
  • the electronic device 200 can realize the shooting function through the ISP, the camera module 291 , the video codec, the GPU, the display screen 290 and the application processor.
  • the electronic device 200 may include 1 or N camera modules 291 , where N is a positive integer greater than 1. If the electronic device 200 includes N cameras, one of the N cameras is the main camera.
  • the internal memory 221 may be used to store computer-executable program codes including instructions.
  • the internal memory 221 may include an area for storing programs and an area for storing data.
  • the external memory interface 222 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 200.
  • the present disclosure also provides a computer-readable storage medium.
  • the computer-readable storage medium may be included in the electronic device described in the above embodiments, or may exist independently without being assembled into the electronic device.
  • a computer readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to, electrical connections with one or more wires, portable computer diskettes, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • the computer-readable storage medium may send, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to wireless, wires, optical cables, RF, etc., or any suitable combination of the foregoing.
  • the computer-readable storage medium bears one or more programs, and when the above one or more programs are executed by an electronic device, the electronic device is made to implement the methods described in the following embodiments.
  • each block in a flowchart or block diagram may represent a module, program segment, or portion of code that includes one or more logical functions for implementing specified executable instructions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block in the block diagrams or flowchart illustrations, and combinations of blocks in the block diagrams or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified function or operation, or by a combination of dedicated hardware and computer instructions.
  • the units described in the embodiments of the present disclosure may be implemented by software or by hardware, and the described units may also be set in a processor. The names of these units do not, in certain circumstances, constitute a limitation on the units themselves.
  • Fig. 3 schematically shows a flowchart of a content processing method in an exemplary embodiment of the present disclosure.
  • the content processing method may include the following steps:
  • target content may refer to media content, including but not limited to images, audio, and the like.
  • the type of target content may include pictures, videos, texts, symbols, audio and so on.
  • pictures can include static images and dynamic images; videos can be understood as a series of continuous images; text can be presented based on images, and text can also include numbers; symbols refer to symbols provided by the computer system, and may also be symbols drawn by the user.
  • the content sending device may store target content.
  • the content sending device may select the target content to be sent from the stored contents in response to the user's content selection operation. For example, the user selects one or more photos from an album as target content. For another example, the user selects one or more pieces of music from the stored music files as the target content.
  • the content sending device may obtain content from other devices or servers, and use the obtained content as target content to be sent.
  • the target content is converted based on task information of a currently running task.
  • FIG. 5 shows the process of determining target content in this case.
  • the content sending device may determine task information of a currently running task.
  • in the disclosed solution, an application program for transmitting content, that is, an application program associated with the smart glasses, is installed on the content sending device; this application can grab the currently running task to determine the corresponding task information.
  • the application can only grab the process tasks of other applications that have been granted permission. For example, when installing the smart glasses application or modifying its configuration later, a whitelist of applications can be built, such as by popping up the application list to allow the user to select which applications belong to the whitelist. For applications belonging to the whitelist, the smart glasses application can obtain their process tasks.
  • a whitelist of tasks can also be configured; that is to say, not all tasks of a whitelisted application can be captured by the smart glasses application, but only the tasks in the task whitelist.
  • the present disclosure does not limit the whitelist of applications and the whitelist of tasks.
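The two-level whitelist described above might be modeled as follows; the data structures and the application/task names are assumptions for illustration, not part of the patent.

```python
# Application whitelist: apps whose process tasks may be grabbed at all.
APP_WHITELIST = {"navigation_app", "fitness_app"}

# Task whitelist: within a whitelisted app, only these tasks are capturable.
TASK_WHITELIST = {
    "navigation_app": {"route_guidance"},
    "fitness_app": {"workout_session"},
}

def can_capture(app, task):
    """Return True only if both the app and the task are whitelisted."""
    return app in APP_WHITELIST and task in TASK_WHITELIST.get(app, set())
```

A task passes only both filters together, matching the description that a whitelisted app may still have non-capturable tasks.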
  • the determined currently running task may be a task currently displayed on the interface of the content sending device, or may be a task not displayed on the interface but running in the background.
  • These tasks are, for example, navigation APP tasks, sports and fitness APP tasks, payment APP tasks, etc.
  • the task may be started in response to the user's task setting operation on the application interface.
  • the task setting operation may be an operation of setting a destination and triggering the start of navigation.
  • in step S504, the content sending device may convert the task information into target content.
  • the task information can be directly converted into target content.
  • the content configuration style of the task can be obtained, that is, the appearance of the converted content, including but not limited to font size, font color, image size, image color, image style, content presentation position, content presentation transparency, and so on. Then, the task's current data is combined with the content configuration style to generate the target content. Still taking the countdown as an example, the remaining time is combined with the corresponding content configuration style to generate the target content.
  • the information used to generate the target content is not all of the task information, and further information extraction operations are required.
  • feature information may be extracted from task information, wherein the feature information includes at least one feature item and feature data of the feature item.
  • the task information may include current time, destination information, information on the entire navigation route, information on the remaining time of navigation, and so on.
  • the generated target content may only need the information of the current travel route, and does not pay attention to other information.
  • the current travel route can be used as the feature item, and the specific data of the current travel route can be used as the feature data.
  • the feature data may include, for example, the name of the road the user is currently on, the remaining distance before the next turn, and so on.
  • the content sending device may determine the content configuration style corresponding to the feature item.
  • a mapping relationship between a feature item and a content configuration style may be constructed in advance, and the content configuration style corresponding to the feature item may be determined according to the pre-built mapping relationship.
  • the content configuration style includes but not limited to font size, font color, image size, image color, image style, content presentation position, content presentation transparency and so on.
  • the target content is generated by combining the feature data of the feature item and the content configuration style corresponding to the feature item.
  • the content sending device may use the generated feature data to update the target content in real time, so as to make the target content consistent with the process of task execution.
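Putting the steps above together (feature extraction, a pre-built mapping from feature item to content configuration style, and combination into target content), a minimal sketch could look like this; every field name is an illustrative assumption.

```python
# Pre-built mapping: feature item -> content configuration style.
STYLE_MAP = {
    "current_route": {"font_size": 14, "font_color": "white",
                      "position": "top-left", "transparency": 0.8},
}

def extract_features(task_info):
    """Keep only the feature items the target content needs; other task
    information (e.g. the whole navigation route) is ignored."""
    return {"current_route": {"road": task_info["road"],
                              "distance_to_turn_m": task_info["distance_to_turn_m"]}}

def build_target_content(task_info):
    """Combine each feature item's data with its configuration style."""
    features = extract_features(task_info)
    return [{"item": item, "data": data, "style": STYLE_MAP[item]}
            for item, data in features.items()]
```

Re-running `build_target_content` with fresh task data corresponds to the real-time update of the target content mentioned above.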
  • the first communication method and the second communication method are different communication methods.
  • the first communication method may be NFC or UWB method
  • the second communication method may be Bluetooth or WiFi P2P method.
  • the transmission distance of the first communication method is smaller than the transmission distance of the second communication method.
  • the smart glasses may be determined through the first communication method in response to an operation of the content sending device touching or approaching the smart glasses.
  • the content sending device may determine the smart glasses in response to the user's touch operation on the content sending device and the smart glasses. Specifically, the content sending device can acquire the device information of the smart glasses through NFC touch, and the smart glasses can be determined through the device information.
  • the result of the smart glasses being identified by the content sending device can serve as a trigger condition for sending the target content; that is, when the smart glasses are identified, the content sending device sends the target content to the smart glasses through the second communication method.
  • a second communication manner between the content sending device and the smart glasses may be established. Specifically, when smart glasses are detected, the content sending device acquires device information of the smart glasses, and establishes a second communication mode with the smart glasses according to the device information of the smart glasses.
  • the device information of the smart glasses may include connection configuration information such as MAC address, SSID, password, etc. used to establish the second communication method, and the device information may be configured in the NFC chip of the smart glasses.
  • the content sending device can obtain the device information of the smart glasses.
  • the second communication manner between the content sending device and the smart glasses may be established in advance. For example, when the smart glasses are started, the second communication manner with the content sending device is established.
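The connection setup described above, with device information read over NFC and then used to establish the second communication method, can be sketched as follows. The record fields mirror the MAC/SSID/password example given above, while the function names and the `connect` callback are invented for the sketch.

```python
def parse_nfc_record(record):
    """The glasses' NFC chip is assumed to hold connection configuration."""
    return {"mac": record["mac"], "ssid": record["ssid"],
            "password": record["password"]}

def establish_second_channel(record, connect):
    """`connect` stands in for a platform Bluetooth / Wi-Fi P2P API call."""
    info = parse_nfc_record(record)
    return connect(info["ssid"], info["password"])
```

This bootstrapping is what lets a single touch replace the lengthy manual pairing the background section criticizes.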
  • the content sending device can send the target content to the smart glasses through the second communication method, so that the smart glasses can play the target content so that the user can see and/or hear it.
  • FIG. 7 shows a structural diagram of smart glasses according to an embodiment of the present disclosure.
  • the content receiving unit 71 can convert the target content to generate a signal that the light emitting unit 72 can play, and then forward it to the light emitting unit 72, which transmits it to the image display unit 73, so that the user wearing the smart glasses can view the target content.
  • the content receiving unit 71 may also perform processes such as filtering and denoising of the target content.
  • the content receiving unit 71 can also be understood as a data combing unit of smart glasses.
  • the light emitting unit 72 includes an optical machine.
  • the image display unit 73 includes lenses on the smart glasses, and all or part of the lenses can be used as an interface for displaying target content.
  • FIG. 7 only shows, by way of example, the structure for playing target content on one side; however, in some other embodiments, both lenses of the smart glasses can display the target content, or different parts of the target content, which is not limited in the present disclosure.
  • the smart glasses of the present disclosure may further include an audio playback unit for playing audio content that may be included in the target content.
  • the smart glasses may further include a task control unit, configured to respond to the user's task operation, generate a task control instruction, and send the task control instruction to the content sending device, so as to control the task.
  • a task control unit configured to respond to the user's task operation, generate a task control instruction, and send the task control instruction to the content sending device, so as to control the task.
  • the task control unit may be configured on the temples of the smart glasses, and may include a touch sensing module, so as to generate corresponding task control instructions in response to operations such as sliding and clicking by the user.
  • one or more physical buttons may be configured on the temples of the glasses, so that corresponding task control instructions may be generated in response to the operation of pressing the buttons by the user.
  • a mapping relationship between user operations and task control may be pre-configured on the content sending device, for example, which operation corresponds to pausing the task and which corresponds to starting the task, etc., which is not limited in the present disclosure.
  • the task can be controlled in response to the task control instruction sent by the smart glasses.
  • a voice-based task control scheme can be configured.
  • the content sending device can acquire voice information.
  • the voice information may be obtained directly by the content sending device, or may be obtained by smart glasses and sent to the content sending device.
  • the content sending device can recognize the voice information to determine keywords related to task control.
  • keywords may be pre-configured, and this disclosure does not limit this process.
  • the content sending device can control the task based on the keyword.
  • for example, when the user says "stop navigation", the navigation task of the content sending device is terminated. It can be understood that when the task process disappears, the target content on the smart glasses disappears as well.
  • step S802: the content sending device determines target content to be sent.
  • the target content may be generated based on the running task.
  • step S804: when the user determines that it is time to send the target content, the user performs a tap operation between the content sending device and the smart glasses.
  • step S806 the content sending device sends the target content to the smart glasses.
  • step S808 the smart glasses can play the target content.
  • control process of the task can also be performed through the smart glasses.
  • step S810 the smart glasses generate a task control command in response to the task control operation.
  • step S812 the smart glasses may send the task control instruction to the content sending device.
  • step S814: the content sending device can control the task according to the task control instruction.
  • after the user sets a destination on the mobile phone 91 and starts navigation, the navigation information can be presented on the interface of the mobile phone 91.
  • when the application program associated with the smart glasses 92 is running on the mobile phone 91, the application program can obtain the navigation information from the process and generate the target content to be sent based on the navigation information.
  • the user can perform an NFC-based tap operation between the mobile phone 91 and the smart glasses 92.
  • the mobile phone 91 can send the target content to the smart glasses 92 via Bluetooth.
  • the target content can be displayed on the lenses of the smart glasses 92 .
  • although FIG. 9 presents information on only one lens as an example, the target content can also be presented on both lenses, or different parts of the target content can be presented on each.
  • the target content presented on the smart glasses 92 may not be all navigation information.
  • the navigation information also includes at least the remaining navigation time, information about the next road, etc.; this information is selectively removed when the mobile phone 91 generates the target content.
  • the lens is small compared with the interface of the mobile phone, and the user still needs to see the real road through the smart glasses; therefore, under some strategies of the present disclosure, it may not be possible to present all of the navigation information on the glasses lens.
  • the specific presented target content can be customized to meet the individual needs of different users.
  • the information of the current road and the direction and distance of the next road can be displayed on the smart glasses 92 . It can be understood that as the user travels, the data of the target content will change, that is to say, the information displayed on the smart glasses 92 will also change accordingly.
  • after the user configures a jog on the mobile phone 101, the exercise information can be presented on the interface of the mobile phone 101.
  • when the application program associated with the smart glasses 102 is running on the mobile phone 101, the application program can obtain the exercise information from the process and generate the target content to be sent based on the exercise information.
  • the user can perform an NFC-based tap operation between the mobile phone 101 and the smart glasses 102.
  • the mobile phone 101 can send the target content to the smart glasses 102 via Bluetooth.
  • the target content can be displayed on the lenses of the smart glasses 102 .
  • the target content presented on the smart glasses 102 need not be all of the exercise information; instead, only the currently set jogging status (shown as an image of a running person) and the number of kilometers completed are displayed.
  • the specific presented target content may be customized by the user, which is not limited in the present disclosure.
  • as the user keeps jogging, the data of the target content will change, that is to say, the information displayed on the smart glasses 102 will also change accordingly.
  • at least with respect to the number of completed kilometers, the displayed content changes.
  • with the content processing method of the present disclosure, on the one hand, the user's participation can be limited to bringing the content sending device and the smart glasses within the communication distance of the first communication method so that the content sending device can identify the smart glasses; all remaining operations can be carried out automatically by the content sending device. For the user, content transmission is therefore easy to perform and highly convenient, and the user can control the timing of sending.
  • on the other hand, the present disclosure provides a new content transmission scheme.
  • the communication result of the first communication method is used as a trigger condition to control the content sending device to send content through the second communication method.
  • even in scenarios where the first communication method is inconvenient for or incapable of transmitting the content itself, the content can still be delivered to the smart glasses through this solution, which expands the applicable scenarios of transmitting content to smart glasses; in yet another respect, the content sent to the smart glasses in the present disclosure can be generated by conversion based on a task, which enriches the application scenarios of smart glasses and greatly improves the user experience.
  • this example implementation also provides a content processing apparatus applied to a content sending device.
  • FIG. 11 schematically shows a block diagram of a content processing apparatus according to an exemplary embodiment of the present disclosure.
  • a content processing apparatus 1100 may include a content determination module 1101 and a content transmission module 1103 .
  • the content determination module 1101 may be configured to determine the target content to be sent; the content sending module 1103 may be configured to send the target content to the smart glasses so that the smart glasses can play the target content.
  • the content sending module 1103 may be configured to execute: in response to an operation of touching or approaching the smart glasses, identify the smart glasses through the first communication method.
  • the content processing device 1200 may further include a communication establishment module 1201 .
  • the communication establishment module 1201 may be configured to: acquire device information of the smart glasses after the smart glasses are identified based on the first communication method; and establish the second communication method with the smart glasses according to the device information of the smart glasses, so that the target content is sent to the smart glasses through the second communication method.
  • the communication establishment module 1201 may be configured to perform: before the smart glasses are determined based on the first communication method, pre-establish a second communication method with the smart glasses.
  • the content determination module 1101 may be configured to: determine task information of a currently running task; convert the task information into target content.
  • the content determining module 1101 may also be configured to: respond to a user's task setting operation on the application interface, and start the task.
  • the process of converting task information into target content by the content determination module 1101 may be configured to: extract feature information from the task information, where the feature information includes at least one feature item and the feature data of the feature item; determine the content configuration style corresponding to the feature item; and combine the feature data of the feature item with the corresponding content configuration style to generate the target content.
  • the content determination module 1101 may also be configured to: update the target content in real time with the generated feature data during task execution, so that the target content stays consistent with the progress of the task.
  • the content processing device 1300 may further include a task control module 1301 .
  • the task control module 1301 may be configured to: control the task in response to the task control instruction sent by the smart glasses; wherein, the task control instruction is generated based on the user's control operation on the smart glasses.
  • the task control module 1301 may be configured to: acquire voice information; recognize the voice information to determine keywords related to task control; and control tasks based on the keywords.
  • the example implementations described here can be implemented by software, or by software combined with necessary hardware. Therefore, the technical solutions according to the embodiments of the present disclosure can be embodied in the form of software products, which can be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) or on a network, and include several instructions to make a computing device (which may be a personal computer, a server, a terminal device, a network device, etc.) execute the method according to the embodiments of the present disclosure.
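The playback path summarized in the bullets above (content receiving unit → light emitting unit → image display unit) can be modeled with a minimal sketch. All function names and the string-based "signal" are assumptions for illustration only; the disclosure does not specify an implementation.

```python
# Toy model of the playback path: the content receiving unit cleans up the
# received frame (standing in for filtering/denoising) and converts it into a
# signal the light emitting unit (optical engine) can play; the image display
# unit (the lens) then shows it to the wearer.

def receive(frame: str) -> str:
    cleaned = frame.strip()          # stands in for filtering / denoising
    return f"optical:{cleaned}"      # convert to a playable signal

def emit(signal: str) -> str:
    return signal.removeprefix("optical:")   # optical engine plays the signal

def display(image: str) -> str:
    return f"[lens] {image}"         # shown on all or part of the lens

shown = display(emit(receive("  turn left in 120 m ")))
```

The three functions mirror the three units of FIG. 7; in a real device the conversion would produce a drive signal for the optical engine rather than a string.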

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

A content processing method, a content processing apparatus, a content processing system, a computer-readable storage medium, and an electronic device, relating to the field of computer control technology. The content processing method includes: determining target content to be sent (S32); and, when smart glasses are identified based on a first communication method, sending the target content to the smart glasses through a second communication method so that the smart glasses play the target content (S34). The method can improve the convenience of transmitting information to smart glasses. (FIG. 3)

Description

Content processing method, apparatus, system, storage medium, and electronic device
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to Chinese Patent Application No. 202110586615.7, entitled "Content processing method, apparatus, system, storage medium, and electronic device", filed on May 27, 2021, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the field of computer control technology, and in particular to a content processing method, a content processing apparatus, a content processing system, a computer-readable storage medium, and an electronic device.
BACKGROUND
With the development of terminal technology, increasing attention has been paid to the development and use of smart glasses. Beyond their entertainment value, and more importantly, smart glasses can provide users with convenience in work and daily life through increasingly rich functions.
Smart glasses can present content to users by interacting with other devices. At present, however, transmitting content to smart glasses requires the user to perform lengthy configuration operations on the device side; the process is complicated and has a high learning cost, which is unfavorable to the promotion of smart glasses functions.
SUMMARY
According to a first aspect of the present disclosure, a content processing method applied to a content sending device is provided, including: determining target content to be sent; and, when smart glasses are identified based on a first communication method, sending the target content to the smart glasses through a second communication method so that the smart glasses play the target content.
According to a second aspect of the present disclosure, a content processing apparatus applied to a content sending device is provided, including: a content determination module configured to determine target content to be sent; and a content sending module configured to, when smart glasses are identified based on a first communication method, send the target content to the smart glasses through a second communication method so that the smart glasses play the target content.
According to a third aspect of the present disclosure, a content processing system is provided, including: a content sending device configured to determine target content to be sent and, when smart glasses are identified based on a first communication method, send the target content to the smart glasses through a second communication method; and smart glasses configured to play the target content.
According to a fourth aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when the program is executed by a processor, the above content processing method is implemented.
According to a fifth aspect of the present disclosure, an electronic device is provided, including a processor and a memory configured to store one or more programs which, when executed by the processor, cause the processor to implement the above content processing method.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic diagram of an exemplary architecture of a content processing system according to an embodiment of the present disclosure;
FIG. 2 is a schematic structural diagram of an electronic device suitable for implementing embodiments of the present disclosure;
FIG. 3 schematically shows a flowchart of a content processing method according to an exemplary embodiment of the present disclosure;
FIG. 4 is a schematic diagram of types of target content according to an embodiment of the present disclosure;
FIG. 5 schematically shows a flowchart of determining target content according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a process of determining target content from task information according to the present disclosure;
FIG. 7 is a structural diagram of smart glasses according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of an interaction process of a content processing solution according to an embodiment of the present disclosure;
FIG. 9 is a schematic diagram of the content processing solution taking a navigation scenario as an example;
FIG. 10 is a schematic diagram of the content processing solution taking an exercise scenario as an example;
FIG. 11 schematically shows a block diagram of a content processing apparatus according to an exemplary embodiment of the present disclosure;
FIG. 12 schematically shows a block diagram of a content processing apparatus according to another exemplary embodiment of the present disclosure;
FIG. 13 schematically shows a block diagram of a content processing apparatus according to yet another exemplary embodiment of the present disclosure.
DETAILED DESCRIPTION
Example embodiments will now be described more fully with reference to the accompanying drawings. However, example embodiments can be implemented in various forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that the present disclosure will be thorough and complete and will fully convey the concepts of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a full understanding of the embodiments of the present disclosure. However, those skilled in the art will appreciate that the technical solutions of the present disclosure may be practiced while omitting one or more of the specific details, or other methods, components, apparatuses, steps, and the like may be employed. In other instances, well-known technical solutions are not shown or described in detail to avoid obscuring aspects of the present disclosure.
In addition, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, so their repeated description will be omitted. Some of the block diagrams shown in the drawings are functional entities and do not necessarily correspond to physically or logically independent entities. These functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor apparatuses and/or microcontroller apparatuses.
The flowcharts shown in the drawings are merely exemplary and need not include all steps. For example, some steps may be decomposed, and some steps may be combined or partially combined, so the actual execution order may change according to the actual situation. In addition, the terms "first" and "second" below are used only for the purpose of distinction and should not be taken as limitations of the present disclosure.
FIG. 1 is a schematic diagram of an exemplary architecture of a content processing system according to an embodiment of the present disclosure. As shown in FIG. 1, the content processing system may include a content sending device 11 and smart glasses 12.
The content sending device 11 is a device for sending content to the smart glasses 12. The content mentioned in the embodiments of the present disclosure may refer to media content, including but not limited to images, audio, and the like. The content sending device 11 may be any device capable of communicating with the smart glasses 12, including but not limited to a smartphone, a tablet computer, a smart watch, and the like.
In the embodiments of the present disclosure, the content sending device 11 may be used to determine target content to be sent and, when the smart glasses 12 are identified based on a first communication method, send the target content to the smart glasses 12 through a second communication method. Generally, the second communication method has a longer transmission distance than the first communication method.
After the smart glasses 12 obtain the target content, they can play it.
For example, when the target content includes image content, referring to FIG. 1, the smart glasses 12 may include a content receiving unit 121, a light emitting unit 123, and an image display unit 125.
Specifically, the content receiving unit 121 may be used to receive, based on the second communication method, the image content sent by the content sending device 11. The light emitting unit 123 may be used to play the image content, and the image display unit 125 may be used to display the image content played by the light emitting unit 123.
It can be understood that in some embodiments the light emitting unit 123 may include an optical engine provided on the smart glasses 12, and the image display unit 125 may include the lenses of the smart glasses 12.
As another example, when the target content includes audio content, the smart glasses 12 may further include an audio playback unit 127 for playing the audio content.
As yet another example, when the target content includes both image content and audio content, the smart glasses 12 may control them to play simultaneously or separately.
In addition, the target content sent by the content sending device 11 may be content obtained by converting task information of a currently running task. In this case, referring to FIG. 1, the smart glasses 12 may further include a task control unit 129.
Specifically, the task control unit 129 may be used to generate a task control instruction in response to a user's task control operation and send the task control instruction to the content sending device 11. The content sending device 11 may control the currently running task in response to the task control instruction, for example, pausing, starting, or terminating the task.
It should be noted that the content processing method of the exemplary embodiments of the present disclosure is generally performed by the content sending device 11, and accordingly the content processing apparatus described below is generally arranged in the content sending device 11.
FIG. 2 is a schematic diagram of an electronic device suitable for implementing the exemplary embodiments of the present disclosure. The content sending device of the exemplary embodiments of the present disclosure may be configured in the form shown in FIG. 2. It should be noted that the electronic device shown in FIG. 2 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
The electronic device of the present disclosure includes at least a processor and a memory, the memory being configured to store one or more programs which, when executed by the processor, enable the processor to implement the content processing method of the exemplary embodiments of the present disclosure.
Specifically, as shown in FIG. 2, the electronic device 200 may include: a processor 210, an internal memory 221, an external memory interface 222, a Universal Serial Bus (USB) interface 230, a charging management module 240, a power management module 241, a battery 242, antenna 1, antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 271, a receiver 272, a microphone 273, a headphone jack 274, a sensor module 280, a display screen 290, a camera module 291, an indicator 292, a motor 293, buttons 294, and a Subscriber Identification Module (SIM) card interface 295. The sensor module 280 may include a depth sensor, a pressure sensor, a gyroscope sensor, a barometric pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
It can be understood that the structure illustrated in this embodiment of the present disclosure does not constitute a specific limitation on the electronic device 200. In other embodiments of the present disclosure, the electronic device 200 may include more or fewer components than shown, combine certain components, split certain components, or have a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 210 may include one or more processing units; for example, the processor 210 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a Neural-network Processing Unit (NPU). The different processing units may be independent devices or integrated into one or more processors. In addition, a memory configured to store instructions and data may also be provided in the processor 210.
The wireless communication function of the electronic device 200 may be implemented through antenna 1, antenna 2, the mobile communication module 250, the wireless communication module 260, the modem processor, the baseband processor, and the like.
The mobile communication module 250 can provide wireless communication solutions applied to the electronic device 200, including 2G/3G/4G/5G.
The wireless communication module 260 can provide wireless communication solutions applied to the electronic device 200, including Wireless Local Area Networks (WLAN) (such as Wireless Fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Ultra Wide Band (UWB), and Infrared (IR).
The electronic device 200 implements a display function through the GPU, the display screen 290, the application processor, and the like. The GPU is a microprocessor for image processing that connects the display screen 290 and the application processor; it performs mathematical and geometric calculations for graphics rendering. The processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
The electronic device 200 can implement a shooting function through the ISP, the camera module 291, the video codec, the GPU, the display screen 290, the application processor, and the like. In some embodiments, the electronic device 200 may include 1 or N camera modules 291, where N is a positive integer greater than 1; if the electronic device 200 includes N cameras, one of the N cameras is the main camera.
The internal memory 221 may be used to store computer-executable program code, which includes instructions. The internal memory 221 may include a program storage area and a data storage area. The external memory interface 222 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 200.
The present disclosure also provides a computer-readable storage medium, which may be included in the electronic device described in the above embodiments or may exist separately without being assembled into the electronic device.
The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in combination with an instruction execution system, apparatus, or device.
The computer-readable storage medium can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable storage medium can be transmitted by any suitable medium, including but not limited to wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
The computer-readable storage medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the methods described in the following embodiments.
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or part of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in a block diagram or flowchart, and combinations of blocks in a block diagram or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented in software or in hardware, and the described units may also be provided in a processor. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves.
FIG. 3 schematically shows a flowchart of the content processing method of an exemplary embodiment of the present disclosure. Referring to FIG. 3, the content processing method may include the following steps:
S32. Determine target content to be sent.
In the exemplary embodiments of the present disclosure, the target content may refer to media content, including but not limited to images, audio, and the like.
Referring to FIG. 4, the types of target content may include pictures, video, text, symbols, audio, and the like. Pictures may include static images and dynamic images; video can be understood as a series of consecutive images; text can be presented in the form of images and may include numbers; symbols may be symbols provided by the computer system or symbols drawn by the user.
According to some embodiments of the present disclosure, the content sending device may store the target content. In this case, the content sending device may select the target content to be sent from the stored content in response to the user's content selection operation. For example, the user selects one or more photos from an album as the target content. As another example, the user selects one or more pieces of music from stored music files as the target content.
According to other embodiments of the present disclosure, the content sending device may obtain content from another device or a server and use the obtained content as the target content to be sent.
According to still other embodiments of the present disclosure, the target content is obtained by converting task information of a currently running task. FIG. 5 shows the process of determining the target content in this case.
In step S502, the content sending device may determine task information of a currently running task.
Specifically, an application program for transmitting content is installed on the content sending device; that is, in the solution of the present disclosure, an application program associated with the smart glasses is installed on the content sending device. In this case, the application program can grab process tasks in real time to determine the corresponding task information.
The application program can only grab the process tasks of other applications for which permission has been obtained. For example, when the smart glasses application is installed, or when the configuration is modified later, a white list of applications can be built, e.g., an application list pops up for the user to select the white-listed applications. For applications on the white list, the smart glasses application can obtain their process tasks.
In addition, a white list of tasks can also be configured; that is, not all tasks of a white-listed application can be grabbed by the smart glasses application, but only the tasks on the task white list. The present disclosure does not limit the application white list or the task white list.
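The two-level permission check described above (an application white list, plus an optional task white list within each application) can be sketched as follows. The app and task names are assumptions for illustration; the disclosure does not prescribe a data structure.

```python
# Hypothetical two-level white-list check: the companion app may only grab
# process tasks from white-listed applications, and within those only tasks
# on that application's task white list.

APP_WHITELIST = {"nav_app", "fitness_app"}                     # assumed names
TASK_WHITELIST = {"nav_app": {"navigate"}, "fitness_app": {"jog"}}

def can_grab(app: str, task: str) -> bool:
    """True only if both the app and the specific task are white-listed."""
    return app in APP_WHITELIST and task in TASK_WHITELIST.get(app, set())
```

For example, `can_grab("nav_app", "navigate")` passes, while a payment app's task, or a non-listed task of a listed app, is rejected.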
It should be noted that the currently running task may be a task currently presented on the interface of the content sending device, or a task running in the background without being displayed on the interface. These tasks are, for example, tasks of a navigation app, a fitness app, a payment app, and the like.
The task may be started in response to the user's task setting operation on the application interface. Taking a navigation app as an example, the task setting operation may be an operation of setting a destination and triggering the start of navigation.
In step S504, the content sending device may convert the task information into the target content.
In one embodiment of the present disclosure, for a simple task such as a countdown, the task information can be converted into the target content directly. Specifically, the content configuration style of the task can be obtained, i.e., how the converted content will be presented, including but not limited to font size, font color, image size, image color, image style, content presentation position, content presentation transparency, and the like. Then, the task's current data is combined with the content configuration style to generate the target content. Still taking the countdown as an example, the remaining time is combined with the corresponding content configuration style to generate the target content.
In another embodiment of the present disclosure, the task information is extensive, and the information used to generate the target content is not all of the task information; a further information extraction operation is then required.
Referring to FIG. 6, first, feature information can be extracted from the task information, where the feature information includes at least one feature item and the feature data of the feature item. Taking a navigation task as an example, the task information may include the current time, destination information, information about the entire navigation route, the remaining navigation time, and so on, while the generated target content may only need information about the current route and not the other aspects. In this case, the current route can be taken as the feature item and the specific data of the current route as the feature data; the feature data may include, for example, the name of the current road and the remaining distance before turning.
It should be noted that, for a given task, which feature or features are to be extracted can be customized in advance by the user, and the present disclosure does not limit this.
Next, the content sending device may determine the content configuration style corresponding to the feature item. Specifically, a mapping relationship between feature items and content configuration styles can be built in advance, and the content configuration style corresponding to a feature item can be determined according to the pre-built mapping relationship. The content configuration style includes but is not limited to font size, font color, image size, image color, image style, content presentation position, content presentation transparency, and the like.
Then, the feature data of the feature item and the content configuration style corresponding to the feature item are combined to generate the target content.
It should be understood that during the execution of the task, since the feature data usually changes, the content sending device can update the target content in real time with the generated feature data so that the target content stays consistent with the progress of the task.
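The conversion just described — extract feature items and their data from the task information, look up each item's pre-configured content configuration style, and combine the two into target content — can be sketched as below. All field names, style keys, and values are illustrative assumptions, not part of the disclosure.

```python
# Minimal sketch of task-info -> target-content conversion. The style map
# stands in for the pre-built mapping between feature items and content
# configuration styles (font size, position, transparency, ...).

STYLE_MAP = {  # feature item -> content configuration style (assumed values)
    "current_road": {"font_size": 18, "position": "top-left", "alpha": 0.8},
    "turn_distance": {"font_size": 24, "position": "center", "alpha": 1.0},
}

def extract_features(task_info: dict) -> dict:
    """Keep only the feature items configured for extraction."""
    return {k: task_info[k] for k in STYLE_MAP if k in task_info}

def to_target_content(task_info: dict) -> list:
    """Combine each feature item's data with its content configuration style."""
    features = extract_features(task_info)
    return [{"item": k, "data": v, "style": STYLE_MAP[k]}
            for k, v in features.items()]

content = to_target_content({"current_road": "Main St",
                             "turn_distance": "120 m",
                             "eta": "9:41"})  # "eta" is not a feature item
```

Note how `eta` is dropped: only pre-configured feature items survive extraction, mirroring the selective removal of navigation details described later for FIG. 9.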
S34. When smart glasses are identified based on the first communication method, send the target content to the smart glasses through the second communication method so that the smart glasses play the target content.
In the exemplary embodiments of the present disclosure, the first communication method and the second communication method are different communication methods. For example, the first communication method may be NFC or UWB, and the second communication method may be Bluetooth or Wi-Fi P2P. In some embodiments, the transmission distance of the first communication method is smaller than that of the second communication method.
The smart glasses may be identified through the first communication method in response to an operation of the content sending device touching or approaching the smart glasses.
For example, the content sending device can identify the smart glasses in response to the user's tap operation between the content sending device and the smart glasses. Specifically, through an NFC tap, the content sending device can obtain the device information of the smart glasses, and the smart glasses can be identified through this device information.
In the solution of the present disclosure, the result of the content sending device identifying the smart glasses can be used as the trigger condition for sending the target content; that is, when the smart glasses are identified, the target content is sent to the smart glasses, and the content sending device sends the target content through the second communication method.
According to some embodiments of the present disclosure, the second communication method between the content sending device and the smart glasses may be established after the smart glasses are identified. Specifically, when the smart glasses are detected, the content sending device obtains the device information of the smart glasses and establishes the second communication method with the smart glasses according to that device information.
The device information of the smart glasses may include connection configuration information used to establish the second communication method, such as a MAC address, an SSID, and a password, and the device information may be configured in the NFC chip of the smart glasses. Thus, based on the tap operation, the content sending device can obtain the device information of the smart glasses.
According to other embodiments of the present disclosure, the second communication method between the content sending device and the smart glasses may be established in advance, before the content sending device identifies the smart glasses. For example, when the smart glasses are started, the second communication method with the content sending device is established.
The content sending device can send the target content to the smart glasses through the second communication method, whereupon the smart glasses can play the target content so that the user can see and/or hear it.
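The two-channel handshake described above — reading the glasses' connection info over the short-range first channel as the trigger for sending content over the longer-range second channel — can be simulated in a few lines. The NFC read is mocked and the `mac` field is an assumed piece of device information; real code would use the platform's NFC and Bluetooth/Wi-Fi P2P APIs.

```python
# Sketch of the trigger logic: no NFC read, no send; a successful read
# yields the device info needed to send over the second channel.

def read_nfc_tag(tag_payload):
    """Simulated first-channel (NFC) read: returns the glasses' device info
    (e.g., MAC address) if a tag is in range, else None."""
    return tag_payload

def send_target_content(tag_payload, content, link_log):
    info = read_nfc_tag(tag_payload)
    if info is None:
        return False                     # glasses not identified: do not send
    # Stands in for the second channel (Bluetooth / Wi-Fi P2P), established
    # (or pre-established) using the device info read from the tag.
    link_log.append((info["mac"], content))
    return True

log = []
ok = send_target_content({"mac": "AA:BB:CC:DD:EE:FF"}, "nav frame", log)
missed = send_target_content(None, "nav frame", log)
```

The identification result, not the NFC channel itself, carries the content: the payload travels over the second channel only once the first channel has produced the device info.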
FIG. 7 is a structural diagram of the smart glasses of an embodiment of the present disclosure.
Referring to FIG. 7, taking image content as an example of the target content, after receiving the target content, the content receiving unit 71 can convert it into a signal that the light emitting unit 72 can play and pass it to the light emitting unit 72, which transmits it to the image display unit 73; the user wearing the smart glasses can thus view the target content.
In addition, the content receiving unit 71 can also perform processes such as filtering and denoising on the target content. The content receiving unit 71 can also be understood as the data organizing unit of the smart glasses. The light emitting unit 72 includes an optical engine. The image display unit 73 includes the lenses of the smart glasses, and all or part of a lens can serve as the interface for displaying the target content.
FIG. 7 shows the structure for playing target content using one side as an example only; in other embodiments, both lenses of the smart glasses can display the target content, or different parts of the target content, which the present disclosure does not limit.
Although not shown in FIG. 7, the smart glasses of the present disclosure may further include an audio playback unit for playing audio content possibly contained in the target content.
In addition, the smart glasses may further include a task control unit for generating a task control instruction in response to the user's task control operation and sending the task control instruction to the content sending device so that the task can be controlled.
Specifically, the task control unit may be arranged on a temple of the smart glasses and may include a touch sensing module so that corresponding task control instructions can be generated in response to user operations such as sliding and tapping. Alternatively, one or more physical buttons may be arranged on the temple so that corresponding task control instructions can be generated in response to the user pressing a button.
In addition, the mapping relationship between user operations and task control can be pre-configured on the content sending device, for example, which operation corresponds to pausing the task and which corresponds to starting it; the present disclosure does not limit this.
The content sending device can control the task in response to the task control instruction sent by the smart glasses.
Besides controlling the task through control operations on the smart glasses, considering scenarios such as exercise where manual operation may be inconvenient for the user, a voice-based task control scheme can be configured.
First, the content sending device can obtain voice information. The voice information may be obtained directly by the content sending device, or obtained by the smart glasses and sent to the content sending device.
Next, the content sending device can recognize the voice information and determine keywords related to task control. Specifically, the keywords can be configured in advance, and the present disclosure does not limit this process.
Then, the content sending device can control the task based on the keywords.
For example, when the user says "stop navigation", the navigation task of the content sending device is terminated. It can be understood that when the task process disappears, the target content on the smart glasses disappears as well.
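The voice-control path above reduces to matching pre-configured keywords in the (already transcribed) voice input and mapping them to task-control actions. The keyword table and action names below are assumptions for illustration; speech-to-text itself is out of scope here.

```python
# Sketch of keyword-based task control: scan the transcript for any
# pre-configured control phrase and return the mapped action.

KEYWORDS = {             # pre-configured phrase -> task-control action
    "stop navigation": "terminate",
    "pause": "pause",
    "resume": "start",
}

def control_from_voice(transcript: str):
    """Return the task-control action for the first matching keyword,
    or None if no control phrase is present."""
    text = transcript.lower()
    for phrase, action in KEYWORDS.items():
        if phrase in text:
            return action
    return None

action = control_from_voice("OK, stop navigation now")
```

Here "stop navigation" maps to terminating the task, which in turn removes the target content from the glasses, matching the example in the text.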
The interaction process of content processing in an embodiment of the present disclosure is described below with reference to FIG. 8.
In step S802, the content sending device determines target content to be sent. As described above, the target content may be generated based on a running task.
In step S804, when the user determines that it is time to send the target content, the user performs a tap operation between the content sending device and the smart glasses.
In step S806, the content sending device sends the target content to the smart glasses.
In step S808, the smart glasses can play the target content.
Afterwards, the task can also be controlled through the smart glasses.
In step S810, the smart glasses generate a task control instruction in response to a task control operation.
In step S812, the smart glasses can send the task control instruction to the content sending device.
In step S814, the content sending device can control the task according to the task control instruction.
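Steps S802 to S814 form a round trip that can be exercised end to end with two toy objects. Class names, the swipe-to-pause mapping, and the sample content are all assumptions for illustration; the tap and the radio links are elided into direct method calls.

```python
# Toy run of the FIG. 8 interaction: phone prepares content (S802), the tap
# triggers the send and playback (S804-S808), and a control instruction
# flows back to the phone (S810-S814).

class Phone:                                   # content sending device
    def __init__(self):
        self.task_running = True
    def target_content(self):                  # S802: content from the task
        return "jogging: 2.5 km"
    def handle_instruction(self, instr):       # S814: control the task
        if instr == "pause":
            self.task_running = False

class Glasses:                                 # smart glasses
    def __init__(self):
        self.screen = None
    def play(self, content):                   # S808: play target content
        self.screen = content
    def control(self, user_op):                # S810: op -> instruction
        return {"swipe": "pause"}.get(user_op)

phone, glasses = Phone(), Glasses()
glasses.play(phone.target_content())           # S804/S806: tap, then send
phone.handle_instruction(glasses.control("swipe"))  # S812/S814: control back
```

After the run the glasses show the jogging content and the phone's task is paused, mirroring the forward (content) and backward (control) halves of the sequence.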
The content processing solution of the present disclosure is described below taking a navigation scenario as an example.
Referring to FIG. 9, after the user sets a destination on the mobile phone 91 and starts navigation, the navigation information can be presented on the interface of the mobile phone 91. When the application program associated with the smart glasses 92 is running on the mobile phone 91, the application program can obtain the navigation information from the process and generate the target content to be sent based on the navigation information.
When the user wants the navigation information displayed on the smart glasses 92, the user can perform an NFC-based tap operation between the mobile phone 91 and the smart glasses 92; in this case, the mobile phone 91 can send the target content to the smart glasses 92 via Bluetooth. The target content can then be displayed on the lenses of the smart glasses 92.
On the one hand, although FIG. 9 presents information on only one lens as an example, the target content can also be presented on both lenses, or different parts of the target content can be presented on each.
On the other hand, as can be seen from FIG. 9, the target content presented on the smart glasses 92 need not be all of the navigation information. From the content displayed on the interface of the mobile phone 91, the navigation information also includes at least the remaining navigation time and information about the next road; this information is selectively removed when the mobile phone 91 generates the target content. This reflects the fact that the glasses lens is small compared with the phone's interface and the user still needs to see the real road through the smart glasses, so under some strategies of the present disclosure it may not be possible to present all of the navigation information on the lens. In addition, the specific target content presented can be customized to meet the individual needs of different users.
As shown in FIG. 9, the smart glasses 92 can display the information of the current road and the direction and distance of the next road. It can be understood that as the user travels, the data of the target content changes; that is, the information displayed on the smart glasses 92 changes accordingly.
The content processing solution of the present disclosure is described below taking an exercise scenario as an example.
Referring to FIG. 10, after the user configures a jog on the mobile phone 101, the exercise information can be presented on the interface of the mobile phone 101. When the application program associated with the smart glasses 102 is running on the mobile phone 101, the application program can obtain the exercise information from the process and generate the target content to be sent based on the exercise information.
When the user wants the exercise information displayed on the smart glasses 102, the user can perform an NFC-based tap operation between the mobile phone 101 and the smart glasses 102; in this case, the mobile phone 101 can send the target content to the smart glasses 102 via Bluetooth. The target content can then be displayed on the lenses of the smart glasses 102.
Similarly, on the one hand, the target content can also be presented on both lenses, or different parts of the target content can be presented on each.
On the other hand, the target content presented on the smart glasses 102 need not be all of the exercise information; only the currently set jogging status (shown as an image of a running person) and the number of kilometers completed are displayed. However, the specific target content presented can be customized by the user, which the present disclosure does not limit.
In addition, as the user keeps jogging, the data of the target content changes; that is, the information displayed on the smart glasses 102 changes accordingly. Referring to FIG. 10, the displayed content changes at least with respect to the number of completed kilometers.
In summary, with the content processing method of the present disclosure, on the one hand, the user's participation can be limited to bringing the content sending device and the smart glasses within the communication distance of the first communication method so that the content sending device can identify the smart glasses; all remaining operations can be carried out automatically by the content sending device. For the user, the content transmission operation is therefore easy to perform and highly convenient, and the user can control the timing of sending. On the other hand, the present disclosure provides a new content transmission scheme in which the communication result of the first communication method serves as the trigger condition for the content sending device to send content through the second communication method; conceivably, if the first communication method is inconvenient for or incapable of transmitting the content in a given scenario, the content can still be delivered to the smart glasses through this solution, which expands the applicable scenarios of transmitting content to smart glasses. In yet another respect, the content sent to the smart glasses in the present disclosure can be generated by conversion based on a task, which enriches the application scenarios of smart glasses and greatly improves the user experience.
It should be noted that although the steps of the method of the present disclosure are described in a specific order in the drawings, this does not require or imply that the steps must be executed in that specific order, or that all of the illustrated steps must be executed to achieve the desired result. Additionally or alternatively, some steps may be omitted, multiple steps may be combined into one step for execution, and/or one step may be decomposed into multiple steps for execution.
Further, this example embodiment also provides a content processing apparatus applied to a content sending device.
FIG. 11 schematically shows a block diagram of a content processing apparatus of an exemplary embodiment of the present disclosure. Referring to FIG. 11, a content processing apparatus 1100 according to an exemplary embodiment of the present disclosure may include a content determination module 1101 and a content sending module 1103.
Specifically, the content determination module 1101 may be configured to determine target content to be sent; the content sending module 1103 may be configured to, when smart glasses are identified based on a first communication method, send the target content to the smart glasses through a second communication method so that the smart glasses play the target content.
According to an exemplary embodiment of the present disclosure, the content sending module 1103 may be configured to identify the smart glasses through the first communication method in response to an operation of touching or approaching the smart glasses.
According to an exemplary embodiment of the present disclosure, referring to FIG. 12, compared with the content processing apparatus 1100, a content processing apparatus 1200 may further include a communication establishment module 1201.
Specifically, the communication establishment module 1201 may be configured to: obtain the device information of the smart glasses after the smart glasses are identified based on the first communication method; and establish the second communication method with the smart glasses according to the device information of the smart glasses, so that the target content is sent to the smart glasses through the second communication method.
According to an exemplary embodiment of the present disclosure, the communication establishment module 1201 may be configured to pre-establish the second communication method with the smart glasses before the smart glasses are identified based on the first communication method.
According to an exemplary embodiment of the present disclosure, the content determination module 1101 may be configured to determine task information of a currently running task and convert the task information into the target content.
According to an exemplary embodiment of the present disclosure, the content determination module 1101 may also be configured to start the task in response to the user's task setting operation on the application interface.
According to an exemplary embodiment of the present disclosure, the process by which the content determination module 1101 converts the task information into the target content may be configured to: extract feature information from the task information, the feature information including at least one feature item and the feature data of the feature item; determine the content configuration style corresponding to the feature item; and combine the feature data of the feature item with the content configuration style corresponding to the feature item to generate the target content.
According to an exemplary embodiment of the present disclosure, the content determination module 1101 may also be configured to update the target content in real time with the generated feature data during task execution, so that the target content stays consistent with the progress of the task.
According to an exemplary embodiment of the present disclosure, referring to FIG. 13, compared with the content processing apparatus 1100, a content processing apparatus 1300 may further include a task control module 1301.
Specifically, the task control module 1301 may be configured to control the task in response to a task control instruction sent by the smart glasses, where the task control instruction is generated based on the user's control operation on the smart glasses.
According to an exemplary embodiment of the present disclosure, the task control module 1301 may be configured to: obtain voice information; recognize the voice information to determine keywords related to task control; and control the task based on the keywords.
Since the functional modules of the content processing apparatus of the embodiments of the present disclosure are the same as those in the above method embodiments, they are not repeated here.
Through the description of the above embodiments, those skilled in the art will readily understand that the example embodiments described here can be implemented in software, or in software combined with necessary hardware. Therefore, the technical solutions according to the embodiments of the present disclosure can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal apparatus, a network device, etc.) to execute the methods according to the embodiments of the present disclosure.
In addition, the above drawings are merely schematic illustrations of the processing included in the methods of the exemplary embodiments of the present disclosure and are not intended to be limiting. It is easy to understand that the processing shown in the above drawings does not indicate or limit the chronological order of the processing. It is also easy to understand that the processing may be executed, for example, synchronously or asynchronously in multiple modules.
It should be noted that although several modules or units of a device for action execution are mentioned in the detailed description above, such division is not mandatory. In fact, according to the embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in one module or unit; conversely, the features and functions of one module or unit described above may be further divided into multiple modules or units.
Other embodiments of the present disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the contents disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or customary technical means in the technical field not disclosed by the present disclosure. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the claims.
It should be understood that the present disclosure is not limited to the precise structures described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.

Claims (20)

  1. A content processing method, applied to a content sending device, comprising:
    determining target content to be sent;
    when smart glasses are identified based on a first communication method, sending the target content to the smart glasses through a second communication method so that the smart glasses play the target content.
  2. The content processing method according to claim 1, wherein identifying the smart glasses based on the first communication method comprises:
    identifying the smart glasses through the first communication method in response to an operation of touching or approaching the smart glasses.
  3. The content processing method according to claim 2, wherein after the smart glasses are identified based on the first communication method, the content processing method further comprises:
    obtaining device information of the smart glasses;
    establishing the second communication method with the smart glasses according to the device information of the smart glasses, so that the target content is sent to the smart glasses through the second communication method.
  4. The content processing method according to claim 2, wherein before the smart glasses are identified based on the first communication method, the content processing method further comprises:
    pre-establishing the second communication method with the smart glasses.
  5. The content processing method according to any one of claims 1 to 4, wherein determining the target content to be sent comprises:
    determining task information of a currently running task;
    converting the task information into the target content.
  6. The content processing method according to claim 5, wherein the content processing method further comprises:
    starting the task in response to a user's task setting operation on an application interface.
  7. The content processing method according to claim 5, wherein converting the task information into the target content comprises:
    extracting feature information from the task information, the feature information comprising at least one feature item and feature data of the feature item;
    determining a content configuration style corresponding to the feature item;
    combining the feature data of the feature item with the content configuration style corresponding to the feature item to generate the target content.
  8. The content processing method according to claim 7, wherein the content processing method further comprises:
    during execution of the task, updating the target content in real time with the generated feature data so that the target content stays consistent with the progress of the task.
  9. The content processing method according to claim 8, wherein the content processing method further comprises:
    controlling the task in response to a task control instruction sent by the smart glasses;
    wherein the task control instruction is generated based on a user's control operation on the smart glasses.
  10. The content processing method according to claim 8, wherein the content processing method further comprises:
    obtaining voice information;
    recognizing the voice information to determine keywords related to task control;
    controlling the task based on the keywords.
  11. The content processing method according to claim 1, wherein a transmission distance of the first communication method is smaller than a transmission distance of the second communication method.
  12. A content processing apparatus, applied to a content sending device, comprising:
    a content determination module, configured to determine target content to be sent;
    a content sending module, configured to, when smart glasses are identified based on a first communication method, send the target content to the smart glasses through a second communication method so that the smart glasses play the target content.
  13. A content processing system, comprising:
    a content sending device, configured to determine target content to be sent and, when smart glasses are identified based on a first communication method, send the target content to the smart glasses through a second communication method;
    the smart glasses, configured to play the target content.
  14. The content processing system according to claim 13, wherein the target content comprises image content, and wherein the smart glasses comprise:
    a content receiving unit, configured to receive, based on the second communication method, the image content sent by the content sending device;
    a light emitting unit, configured to play the image content;
    an image display unit, configured to display the image content.
  15. The content processing system according to claim 14, wherein the target content further comprises audio content, and wherein the smart glasses further comprise:
    an audio playback unit, configured to play the audio content.
  16. The content processing system according to any one of claims 13 to 15, wherein the process by which the content sending device determines the target content is configured to: determine task information of a currently running task and convert the task information into the target content.
  17. The content processing system according to claim 16, wherein the smart glasses further comprise:
    a task control unit, configured to generate a task control instruction in response to a user's task control operation and send the task control instruction to the content sending device, so that the content sending device controls the task.
  18. The content processing system according to claim 13, wherein a transmission distance of the first communication method is smaller than a transmission distance of the second communication method.
  19. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the content processing method according to any one of claims 1 to 11.
  20. An electronic device, comprising:
    a processor;
    a memory, configured to store one or more programs which, when executed by the processor, cause the processor to implement the content processing method according to any one of claims 1 to 11.
PCT/CN2022/077137 2021-05-27 2022-02-21 内容处理方法、装置、系统、存储介质和电子设备 WO2022247363A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110586615.7 2021-05-27
CN202110586615.7A CN113329375B (zh) 2021-05-27 2021-05-27 内容处理方法、装置、系统、存储介质和电子设备

Publications (1)

Publication Number Publication Date
WO2022247363A1 true WO2022247363A1 (zh) 2022-12-01

Family

ID=77421927

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/077137 WO2022247363A1 (zh) 2021-05-27 2022-02-21 内容处理方法、装置、系统、存储介质和电子设备

Country Status (2)

Country Link
CN (1) CN113329375B (zh)
WO (1) WO2022247363A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160034042A1 (en) * 2014-07-31 2016-02-04 Samsung Electronics Co., Ltd. Wearable glasses and method of providing content using the same
WO2016095422A1 (zh) * 2014-12-17 2016-06-23 中兴通讯股份有限公司 眼镜、显示终端、图像显示处理系统及方法
CN109890012A (zh) * 2018-12-29 2019-06-14 北京旷视科技有限公司 数据传输方法、装置、系统和存储介质
CN109996348A (zh) * 2017-12-29 2019-07-09 中兴通讯股份有限公司 智能眼镜与智能设备交互的方法、系统及存储介质
CN112130788A (zh) * 2020-08-05 2020-12-25 华为技术有限公司 一种内容分享方法及其装置
CN112269468A (zh) * 2020-10-23 2021-01-26 深圳市恒必达电子科技有限公司 基于蓝牙、2.4g、wifi连接获取云端资讯的人机交互智能眼镜、方法及其平台

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105162497B (zh) * 2015-08-04 2018-11-16 天地融科技股份有限公司 一种数据传输方法、终端、电子签名设备及系统
US10175753B2 (en) * 2015-10-20 2019-01-08 Bragi GmbH Second screen devices utilizing data from ear worn device system and method
CN107979830B (zh) * 2017-11-21 2020-11-27 大众问问(北京)信息科技有限公司 一种智能后视镜的蓝牙连接方法、装置、设备及存储介质
CN108600632B (zh) * 2018-05-17 2021-04-20 Oppo(重庆)智能科技有限公司 拍照提示方法、智能眼镜及计算机可读存储介质
CN111367407B (zh) * 2020-02-24 2023-10-10 Oppo(重庆)智能科技有限公司 智能眼镜交互方法、智能眼镜交互装置及智能眼镜
CN111479148B (zh) * 2020-04-17 2022-02-08 Oppo广东移动通信有限公司 可穿戴设备、眼镜终端、处理终端、数据交互方法与介质
CN112732217A (zh) * 2020-12-30 2021-04-30 深圳增强现实技术有限公司 5g消息的智能眼镜的信息交互方法、终端和存储介质
CN112817665A (zh) * 2021-01-22 2021-05-18 北京小米移动软件有限公司 设备交互方法及装置、存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160034042A1 (en) * 2014-07-31 2016-02-04 Samsung Electronics Co., Ltd. Wearable glasses and method of providing content using the same
WO2016095422A1 (zh) * 2014-12-17 2016-06-23 中兴通讯股份有限公司 眼镜、显示终端、图像显示处理系统及方法
CN109996348A (zh) * 2017-12-29 2019-07-09 中兴通讯股份有限公司 智能眼镜与智能设备交互的方法、系统及存储介质
CN109890012A (zh) * 2018-12-29 2019-06-14 北京旷视科技有限公司 数据传输方法、装置、系统和存储介质
CN112130788A (zh) * 2020-08-05 2020-12-25 华为技术有限公司 一种内容分享方法及其装置
CN112269468A (zh) * 2020-10-23 2021-01-26 深圳市恒必达电子科技有限公司 基于蓝牙、2.4g、wifi连接获取云端资讯的人机交互智能眼镜、方法及其平台

Also Published As

Publication number Publication date
CN113329375B (zh) 2023-06-27
CN113329375A (zh) 2021-08-31

Similar Documents

Publication Publication Date Title
US12019864B2 (en) Multimedia data playing method and electronic device
WO2021213120A1 (zh) 投屏方法、装置和电子设备
WO2021078284A1 (zh) 一种内容接续方法及电子设备
CN107005739B (zh) 用于基于语音的设备的外部视觉交互
KR102415870B1 (ko) 적응적으로 작업 수행의 주체를 변경하기 위한 장치 및 방법
CN112394895B (zh) 画面跨设备显示方法与装置、电子设备
WO2020216156A1 (zh) 投屏方法和计算设备
CN110022489B (zh) 视频播放方法、装置及存储介质
WO2022100304A1 (zh) 应用内容跨设备流转方法与装置、电子设备
WO2020125365A1 (zh) 音视频处理方法、装置、终端及存储介质
EP4123444A1 (en) Voice information processing method and apparatus, and storage medium and electronic device
WO2020134560A1 (zh) 直播间切换方法、装置、终端、服务器及存储介质
JP7173670B2 (ja) 音声制御コマンド生成方法および端末
CN110334352A (zh) 引导信息显示方法、装置、终端及存储介质
CN112188461A (zh) 近场通信装置的控制方法及装置、介质和电子设备
CN113238727A (zh) 屏幕切换方法及装置、计算机可读介质和电子设备
CN111492678A (zh) 一种文件传输方法及电子设备
US20230409192A1 (en) Device Interaction Method, Electronic Device, and Interaction System
WO2023284355A1 (zh) 信息处理方法、装置、系统、存储介质和电子设备
WO2021121036A1 (zh) 一种折叠设备的自定义按键方法、设备及存储介质
WO2022247363A1 (zh) 内容处理方法、装置、系统、存储介质和电子设备
CN113805825B (zh) 设备之间的数据通信方法、设备及可读存储介质
CN115730091A (zh) 批注展示方法、装置、终端设备及可读存储介质
CN113407318A (zh) 操作系统切换方法及装置、计算机可读介质和电子设备
CN111367492A (zh) 网页页面展示方法及装置、存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22810097

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22810097

Country of ref document: EP

Kind code of ref document: A1