CN113329375A - Content processing method, device, system, storage medium and electronic equipment - Google Patents


Info

Publication number: CN113329375A (application CN202110586615.7A; granted publication CN113329375B)
Authority: CN (China)
Prior art keywords: content, task, processing method, smart glasses, content processing
Legal status: Active, granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventors: 林鼎豪, 陈碧莹, 刘章奇
Current and Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority: CN202110586615.7A; related PCT application PCT/CN2022/077137 (published as WO2022247363A1)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/60 Subscription-based services using application servers or record carriers, e.g. SIM application toolkits
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1454 Digital output to display device; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/162 Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41 Structure of client; Structure of client peripherals
    • H04N 21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/436 Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/70 Services for machine-to-machine communication [M2M] or machine type communication [MTC]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/80 Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The disclosure provides a content processing method, a content processing apparatus, a content processing system, a computer-readable storage medium, and an electronic device, and relates to the technical field of computer control. The content processing method comprises the following steps: determining target content to be sent; and, when the smart glasses are discovered based on a first communication mode, sending the target content to the smart glasses through a second communication mode so that the smart glasses can play the target content. The disclosure can improve the convenience of transmitting information to smart glasses.

Description

Content processing method, device, system, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer control technologies, and in particular, to a content processing method, a content processing apparatus, a content processing system, a computer-readable storage medium, and an electronic device.
Background
With the development of terminal technology, the development and use of smart glasses are receiving more and more attention. Beyond their novelty, smart glasses can, more importantly, provide users with convenience in work and life through increasingly rich functions.
Smart glasses may present content to a user through interaction with other devices. At present, however, transmitting content to smart glasses requires the user to perform lengthy configuration operations at the device end; the process is complex and the learning cost is high, which is not conducive to popularizing the functions of smart glasses.
Disclosure of Invention
The present disclosure provides a content processing method, a content processing apparatus, a content processing system, a computer-readable storage medium, and an electronic device, thereby overcoming, at least to some extent, the problem of inconvenience in transmitting information to smart glasses.
According to a first aspect of the present disclosure, there is provided a content processing method applied to a content transmission apparatus, comprising: determining target content to be sent; and, when the smart glasses are discovered based on a first communication mode, sending the target content to the smart glasses through a second communication mode so that the smart glasses can play the target content.
According to a second aspect of the present disclosure, there is provided a content processing apparatus applied to a content transmission device, comprising: a content determining module, configured to determine target content to be sent; and a content sending module, configured to send the target content to the smart glasses through a second communication mode, when the smart glasses are discovered based on a first communication mode, so that the smart glasses can play the target content.
According to a third aspect of the present disclosure, there is provided a content processing system, comprising: a content sending device, configured to determine target content to be sent and, when the smart glasses are discovered based on a first communication mode, send the target content to the smart glasses through a second communication mode; and the smart glasses, configured to play the target content.
According to a fourth aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the content processing method described above.
According to a fifth aspect of the present disclosure, there is provided an electronic device comprising a processor; a memory for storing one or more programs which, when executed by the processor, cause the processor to implement the content processing method described above.
In the technical solutions provided by some embodiments of the present disclosure, the content sending device determines target content to be sent and, when the smart glasses are discovered based on the first communication mode, sends the target content to the smart glasses through the second communication mode so that the smart glasses play the target content. On the one hand, in this scheme, the only user involvement required may be bringing the content sending device and the smart glasses within the communication range of the first communication mode so that the content sending device can discover the smart glasses; the remaining operations are performed automatically by the content sending device. For the user, content transmission is therefore easy to perform and highly convenient, and the user remains in control of when content is sent. On the other hand, the present disclosure provides a new content transmission scheme in which the communication result of the first communication mode serves as the trigger condition for the content sending device to transmit the content through the second communication mode. In scenarios where the first communication mode is inconvenient or cannot carry the content, the content can still be transmitted to the smart glasses through this scheme, thereby expanding the application scenarios of transmitting content to smart glasses.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
FIG. 1 shows a schematic diagram of an exemplary architecture of a content processing system of an embodiment of the present disclosure;
FIG. 2 illustrates a schematic structural diagram of an electronic device suitable for use in implementing embodiments of the present disclosure;
FIG. 3 schematically shows a flow chart of a content processing method according to an exemplary embodiment of the present disclosure;
FIG. 4 illustrates a schematic diagram of types of targeted content of an embodiment of the present disclosure;
FIG. 5 schematically illustrates a flow chart for determining targeted content according to one embodiment of the present disclosure;
FIG. 6 is a schematic diagram illustrating a process of the present disclosure for determining targeted content based on task information;
FIG. 7 illustrates a block diagram of smart glasses according to an embodiment of the present disclosure;
FIG. 8 shows a schematic diagram of an interaction process of a content processing scheme of an embodiment of the present disclosure;
FIG. 9 shows a schematic diagram of a content processing scheme, exemplified by a navigation scenario;
FIG. 10 shows a schematic diagram of a content processing scheme, exemplified by an exercise scenario;
FIG. 11 schematically shows a block diagram of a content processing apparatus according to an exemplary embodiment of the present disclosure;
FIG. 12 schematically illustrates a block diagram of a content processing apparatus according to another exemplary embodiment of the present disclosure;
fig. 13 schematically shows a block diagram of a content processing apparatus according to still another exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the steps. For example, some steps may be decomposed, and some steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation. In addition, all of the following terms "first" and "second" are used for distinguishing purposes only and should not be construed as limiting the present disclosure.
FIG. 1 shows a schematic diagram of an exemplary architecture of a content processing system of an embodiment of the present disclosure. As shown in fig. 1, the content processing system may include a content transmission apparatus 11 and smart glasses 12.
The content transmitting apparatus 11 is an apparatus for transmitting content to the smart glasses 12. Content in embodiments of the present disclosure may refer to media content, including but not limited to images, audio, and the like. The content transmitting device 11 may be any device capable of communicatively connecting with the smart glasses 12, including but not limited to a smart phone, a tablet computer, a smart watch, and the like.
In the embodiment of the present disclosure, the content transmitting device 11 may be configured to determine target content to be transmitted and, when the smart glasses 12 are discovered based on the first communication method, transmit the target content to the smart glasses 12 through the second communication method. Generally, the second communication method has a longer transmission distance than the first communication method.
After the smart glasses 12 acquire the target content, the target content may be played.
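The discovery-then-send behavior described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the class and method names, and the assumption that the first communication method is a short-range scanner (e.g. BLE) and the second a higher-bandwidth channel (e.g. Wi-Fi), are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Discovery:
    """Result of discovering smart glasses via the first communication method."""
    device_name: str
    rssi: int  # signal strength observed over the short-range mode

class ContentSender:
    """Sketch of the content transmitting device's trigger logic (names assumed)."""

    def __init__(self, first_mode_scanner, second_mode_channel):
        self.scanner = first_mode_scanner    # e.g. a BLE scanner object
        self.channel = second_mode_channel   # e.g. a Wi-Fi transfer channel

    def try_send(self, target_content: bytes) -> bool:
        """Send content only after the glasses are discovered via the first mode."""
        glasses = self.scanner.scan()        # returns a Discovery or None
        if glasses is None:
            return False                     # no glasses in range: nothing to do
        # Discovery over the first mode is only the trigger condition;
        # the payload itself travels over the longer-range second mode.
        self.channel.send(glasses.device_name, target_content)
        return True
```

The key design point, as the text notes, is that the first mode carries no content at all; it only gates the transfer.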
For example, in the case where the target content contains image content, referring to fig. 1, the smart glasses 12 may include a content receiving unit 121, a light emitting unit 123, and an image display unit 125.
Specifically, the content receiving unit 121 may be configured to receive the image content transmitted by the content transmitting apparatus 11 based on the second communication method. The light emitting unit 123 may be used to play the image content, and the image display unit 125 may be used to display the image content played by the light emitting unit 123.
It is understood that in some embodiments, the light emitting unit 123 may comprise a light engine provided on the smart glasses 12, and the image display unit 125 may comprise a lens of the smart glasses 12.
For another example, in the case that the target content includes audio content, the smart glasses 12 may further include an audio playing unit 127, and the audio playing unit 127 is configured to play the audio content.
For another example, in the case where the target content includes both image content and audio content, the smart glasses 12 may control them to be played simultaneously or separately.
In addition, the target content transmitted by the content transmission device 11 may be content converted based on task information of a currently running task. In this case, referring to fig. 1, the smart glasses 12 may further include a task control unit 129.
Specifically, the task control unit 129 may be configured to generate a task control instruction in response to a task control operation by a user, and transmit the task control instruction to the content transmission apparatus 11. The content transmission apparatus 11 can control the task currently running in response to the task control instruction, for example, pause the task, start the task, terminate the task, and the like.
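The task control loop above can be illustrated with a small state machine on the content transmission device's side. The instruction names mirror the examples in the text (start, pause, terminate); the class and its states are assumptions for illustration only.

```python
class TaskController:
    """Hypothetical handler for task control instructions sent by the glasses."""

    def __init__(self):
        self.state = "idle"  # the currently running task's state

    def handle(self, instruction: str) -> str:
        """Apply a task control instruction and return the resulting state."""
        transitions = {
            "start": "running",
            "pause": "paused",
            "terminate": "idle",
        }
        if instruction not in transitions:
            raise ValueError(f"unknown task control instruction: {instruction}")
        self.state = transitions[instruction]
        return self.state
```

In practice the instruction would arrive over the communication link from the task control unit 129 rather than as a direct method call.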
It should be noted that the content processing method according to the exemplary embodiment of the present disclosure is generally executed by the content transmission device 11, and accordingly, the content processing apparatus described below is generally configured in the content transmission device 11.
FIG. 2 shows a schematic diagram of an electronic device suitable for use in implementing exemplary embodiments of the present disclosure. The content transmitting apparatus of the exemplary embodiments of the present disclosure may be configured as in fig. 2. It should be noted that the electronic device shown in fig. 2 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
The electronic device of the present disclosure includes at least a processor and a memory for storing one or more programs, which when executed by the processor, cause the processor to implement the content processing method of the exemplary embodiments of the present disclosure.
Specifically, as shown in fig. 2, the electronic device 200 may include: a processor 210, an internal memory 221, an external memory interface 222, a Universal Serial Bus (USB) interface 230, a charging management Module 240, a power management Module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication Module 250, a wireless communication Module 260, an audio Module 270, a speaker 271, a receiver 272, a microphone 273, an earphone interface 274, a sensor Module 280, a display 290, a camera Module 291, an indicator 292, a motor 293, a button 294, and a Subscriber Identity Module (SIM) card interface 295. The sensor module 280 may include a depth sensor, a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
It is to be understood that the illustrated structure of the embodiments of the present disclosure does not constitute a specific limitation to the electronic device 200. In other embodiments of the present disclosure, electronic device 200 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 210 may include one or more processing units, such as: the Processor 210 may include an Application Processor (AP), a modem Processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband Processor, and/or a Neural Network Processor (NPU), and the like. The different processing units may be separate devices or may be integrated into one or more processors. Additionally, a memory may be provided in processor 210 for storing instructions and data.
The wireless communication function of the electronic device 200 may be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, a modem processor, a baseband processor, and the like.
The mobile communication module 250 may provide a solution including 2G/3G/4G/5G wireless communication applied on the electronic device 200.
The Wireless Communication module 260 may provide a solution for Wireless Communication applied to the electronic device 200, including Wireless Local Area Networks (WLANs) (e.g., Wireless Fidelity (Wi-Fi) network), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Ultra Wide Band (UWB), Infrared (IR), and the like.
The electronic device 200 implements a display function through the GPU, the display screen 290, the application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 290 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 210 may include one or more GPUs that execute program instructions to generate or alter display information.
The electronic device 200 may implement a shooting function through the ISP, the camera module 291, the video codec, the GPU, the display screen 290, the application processor, and the like. In some embodiments, the electronic device 200 may include 1 or N camera modules 291, where N is a positive integer greater than 1, and if the electronic device 200 includes N cameras, one of the N cameras is a main camera.
Internal memory 221 may be used to store computer-executable program code, including instructions. The internal memory 221 may include a program storage area and a data storage area. The external memory interface 222 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 200.
The present disclosure also provides a computer-readable storage medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device.
A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable storage medium may transmit, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The computer-readable storage medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method as described in the embodiments below.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
Fig. 3 schematically shows a flow chart of a content processing method of an exemplary embodiment of the present disclosure. Referring to fig. 3, the content processing method may include the steps of:
In step S32, target content to be sent is determined.
In an exemplary embodiment of the present disclosure, the target content may refer to media content including, but not limited to, images, audio, and the like.
Referring to fig. 4, the types of target content may include pictures, videos, text, symbols, audio, and the like. Pictures may comprise static images and dynamic images; a video can be understood as a series of consecutive images; text may be presented in an image-based manner and may also comprise numbers; a symbol may be one provided by the computer system or one drawn by the user.
According to some embodiments of the present disclosure, a content transmitting device may store target content. In this case, the content transmission apparatus may select target content to be transmitted from the stored contents in response to a content selection operation by the user. For example, the user selects one or more photos from the album as the target content. For another example, the user selects one or more pieces of music from among stored music files as target content.
According to other embodiments of the present disclosure, the content sending device may obtain content from other devices or servers, and use the obtained content as target content to be sent.
According to further embodiments of the present disclosure, the target content is converted based on task information of a currently running task. Fig. 5 shows a process of determining the target content in this case.
In step S502, the content transmission apparatus may determine task information of a task currently running.
Specifically, the content sending device is provided with an application program for transmitting content; that is, the content sending device in the present disclosure is provided with an application program associated with the smart glasses. In this case, this application program can capture the tasks of running processes in real time to determine the corresponding task information.
For example, when the application of the smart glasses is installed, or when its configuration is modified later, a whitelist of applications may be constructed; for example, an application list may be popped up for the user to select the whitelisted applications. For applications belonging to the whitelist, the smart glasses application may obtain their process tasks.
In addition, a whitelist of tasks may also be configured; that is, not all tasks of a whitelisted application can be captured by the smart glasses application, but only tasks in the task whitelist. The present disclosure does not limit the application whitelist or the task whitelist.
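The two-level whitelist described above reduces to a simple filter: a task is capturable only if its owning application is on the application whitelist and the task itself is on the task whitelist. The function and data shapes below are illustrative assumptions, not the patent's API.

```python
def capturable_tasks(running_tasks, app_whitelist, task_whitelist):
    """Filter (app, task) pairs down to those the glasses app may capture.

    running_tasks:  iterable of (application name, task name) pairs
    app_whitelist:  set of whitelisted application names
    task_whitelist: set of whitelisted task names
    """
    return [
        (app, task)
        for app, task in running_tasks
        if app in app_whitelist and task in task_whitelist  # both levels must pass
    ]
```

A navigation task of a whitelisted navigation APP would pass both checks, while a debug task of the same APP, or a navigation task of a non-whitelisted APP, would be filtered out.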
It should be noted that the task determined to be currently running may be a task currently presented on the interface of the content sending device, or a task that is not displayed on the interface and runs in the background, such as a task of a navigation APP, an exercise task, or a task of a payment APP.
The task may be initiated in response to a task setting operation performed by the user on the application interface. Taking the navigation APP as an example, the task setting operation may be an operation of setting a destination and triggering the start of navigation.
In step S504, the content transmission apparatus may convert the task information into the target content.
In one embodiment of the present disclosure, for simple tasks, such as a countdown task, the task information may be directly converted into the target content. Specifically, the content configuration style of the task, that is, the appearance of the converted content when presented, may be obtained, including but not limited to font size, font color, image size, image color, image style, content presentation position, content presentation transparency, and the like. Then, the task's current data is combined with the content configuration style to generate the target content. Still taking the countdown as an example, the remaining time is combined with the corresponding content configuration style to generate the target content.
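For the simple countdown case, "combining data with a content configuration style" can be sketched as below. The style fields echo those listed above; representing the rendered content as a plain dict, and all field names, are assumptions made for illustration.

```python
def make_target_content(task_data: dict, style: dict) -> dict:
    """Combine a simple task's current data with its content configuration style.

    task_data: e.g. {"remaining_seconds": 90} for a countdown task
    style:     content configuration style (font, position, transparency, ...)
    """
    return {
        "text": f"{task_data['remaining_seconds']} s remaining",
        "font_size": style.get("font_size", 14),
        "font_color": style.get("font_color", "#FFFFFF"),
        "position": style.get("position", "top-right"),
        "transparency": style.get("transparency", 0.2),
    }
```

Each tick of the countdown would produce fresh task data, so the target content is regenerated from the same style as the remaining time changes.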
In another embodiment of the present disclosure, a task may carry more task information than is needed; since not all of the information is used for generating the target content, further information extraction is required.
Referring to fig. 6, first, feature information may be extracted from the task information, where the feature information includes at least one feature item and feature data of the feature item. Taking the navigation task as an example, the task information may include the current time, destination information, information of the entire navigation route, the remaining navigation time, and the like. The generated target content may only need information of the current travel route and none of the other information; in this case, the current travel route may be used as a feature item, and its specific data may be used as the feature data, which may include, for example, the name of the road on the current travel route and the remaining distance before the next turn.
It should be noted that, for a task, which feature or features are to be extracted may be configured in advance by user customization, and the disclosure is not limited thereto.
Next, the content transmission device may determine a content configuration style corresponding to the feature item. Specifically, a mapping relationship between the feature item and the content configuration style may be pre-constructed, and the content configuration style corresponding to the feature item may be determined according to the pre-constructed mapping relationship. The content configuration style includes, but is not limited to, font size, font color, image size, image color, image style, content rendering position, content rendering transparency, and the like.
And then, generating target content by combining the feature data of the feature items and the content configuration styles corresponding to the feature items.
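The three steps above (extract feature items, look up the pre-built mapping of feature item to content configuration style, then merge feature data with style) can be sketched as follows. The feature items and style values are invented for illustration:

```python
# Hypothetical sketch of the three-step conversion: extract feature items
# from the task information, look up each item's content configuration
# style in a pre-constructed mapping, and merge feature data with style.

STYLE_MAP = {  # feature item -> content configuration style
    "current_route": {"font_size": 20, "position": "bottom"},
    "remaining_time": {"font_size": 14, "position": "top_right"},
}

def extract_features(task_info, wanted_items):
    """Keep only the pre-configured feature items; drop the rest."""
    return {k: v for k, v in task_info.items() if k in wanted_items}

def to_target_content(task_info, wanted_items):
    features = extract_features(task_info, wanted_items)
    return [{"item": item, "data": data, **STYLE_MAP[item]}
            for item, data in features.items()]

nav_info = {"current_route": "Main St, turn left in 200 m",
            "remaining_time": "12 min",
            "destination": "Airport"}  # not a wanted item -> dropped
print(to_target_content(nav_info, {"current_route"}))
```

As the user travels, calling `to_target_content` again with updated task information yields refreshed target content, matching the real-time update described below.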
It should be understood that, given that feature data generally changes while the task is executed, the content transmission apparatus may update the target content in real time using the newly generated feature data, so that the target content remains consistent with the progress of the task.
In step S34, when the smart glasses are determined based on the first communication mode, the target content is sent to the smart glasses through the second communication mode, so that the smart glasses can play the target content.
In an exemplary embodiment of the present disclosure, the first communication method and the second communication method are different communication methods. For example, the first communication mode may be an NFC or UWB mode, and the second communication mode may be a bluetooth or WiFi P2P mode. In some embodiments, the transmission distance of the first communication means is smaller than the transmission distance of the second communication means.
The smart glasses can be determined through the first communication mode in response to an operation of the content sending device touching or approaching the smart glasses.
For example, the content transmitting device may determine the smart glasses in response to a user's touch-and-click operation between the content transmitting device and the smart glasses. Specifically, the content sending device can acquire the device information of the smart glasses through a one-touch NFC read, and the smart glasses can be determined from this device information.
In the present disclosure, the result of the content sending device determining the smart glasses may be used as the trigger condition for sending the target content; that is, once the smart glasses are determined, the content sending device sends the target content to the smart glasses through the second communication method.
According to some embodiments of the present disclosure, the second communication mode of the content transmitting device and the smart glasses may be established after the smart glasses are determined. Specifically, when the smart glasses are detected, the content sending device obtains the device information of the smart glasses, and establishes a second communication mode with the smart glasses according to the device information of the smart glasses.
The device information of the smart glasses may include connection configuration information, such as a MAC address, an SSID, a password, and the like, used to establish the second communication mode, and the device information may be configured in an NFC chip of the smart glasses. Therefore, the content transmitting device can acquire the device information of the smart glasses based on the touch-and-click operation.
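The trigger flow can be sketched as follows: a first-mode (NFC) read yields the glasses' connection configuration, which both identifies the glasses and lets the sender open the second channel and push the content. All APIs here are invented placeholders, not a real NFC or Bluetooth library:

```python
# Hypothetical sketch of the trigger condition: reading the NFC payload
# identifies the smart glasses and supplies the connection configuration
# (e.g. MAC address) used to establish the second communication mode.

sent = []  # records (device_mac, content) pairs pushed over the second channel

def fake_bluetooth_connect(device_info):
    """Stand-in for establishing the second communication mode."""
    class Link:
        def send(self, content):
            sent.append((device_info["mac"], content))
    return Link()

def on_nfc_tag_read(device_info, target_content, connect):
    # A successful first-mode read is itself the trigger condition.
    if not device_info.get("mac"):
        return False  # payload not recognized as smart-glasses device info
    link = connect(device_info)  # establish (or reuse) the second channel
    link.send(target_content)    # push the target content over it
    return True

ok = on_nfc_tag_read({"mac": "AA:BB:CC:DD:EE:FF", "ssid": "glasses-ap"},
                     {"text": "turn left in 200 m"},
                     fake_bluetooth_connect)
print(ok, sent)
```

The same `on_nfc_tag_read` shape also covers the pre-established-channel variant described below: `connect` would then simply return the already-open link.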
According to other embodiments of the present disclosure, the second communication mode between the content transmitting device and the smart glasses may be established in advance before the content transmitting device determines the smart glasses. For example, when the smart glasses are started, the second communication mode with the content transmitting device is established.
The content transmitting device may transmit the target content to the smart glasses through the second communication means, and thus, the smart glasses may play the target content for the user to see and/or hear.
Fig. 7 shows a block diagram of smart glasses according to an embodiment of the present disclosure.
Referring to fig. 7, taking image content as the target content as an example, after receiving the target content, the content receiving unit 71 may convert it into a signal that the light emitting unit 72 can play and forward that signal to the light emitting unit 72, which in turn transmits it to the image display unit 73, so that the user wearing the smart glasses can view the target content.
In addition, the content receiving unit 71 may also perform processing such as filtering and denoising of the target content; it may thus also be understood as a data pre-processing unit of the smart glasses. The light emitting unit 72 includes a light engine. The image display unit 73 includes the lenses of the smart glasses, and all or part of a lens may serve as the interface for displaying the target content.
Fig. 7 illustrates the structure of the target content playing only on one side, however, in other embodiments, both lenses of the smart glasses may display the target content, or different portions of the target content, which is not limited by the present disclosure.
Although not shown in fig. 7, the smart glasses of the present disclosure may further include an audio playing unit for playing audio content that may be contained in the target content.
In addition, the smart glasses may further include a task control unit for generating a task control instruction in response to a task operation of the user and transmitting the task control instruction to the content transmitting apparatus so as to control the task.
Specifically, the task control unit may be disposed on a temple of the smart glasses and may include a touch sensing module, so that a corresponding task control instruction may be generated in response to a sliding operation, a clicking operation, or the like of the user. Alternatively, one or more physical keys may be configured on the temples, so that the corresponding task control instruction may be generated in response to the user pressing a key.
In addition, the mapping relationship between the user's operation and the task control may be previously configured on the content transmission apparatus. For example, what operation corresponds to suspending the task, what operation corresponds to starting the task, and the like, which is not limited by the present disclosure.
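The pre-configured mapping between user operations and task control, and the dispatch on the content sending device, can be sketched as follows. The operation names and commands are invented for illustration:

```python
# Hypothetical sketch: a pre-configured mapping from glasses-side user
# operations to task control commands, dispatched on the sending device.

OPERATION_TO_COMMAND = {
    "single_tap": "pause_task",
    "double_tap": "start_task",
}

def handle_operation(operation, task):
    """Apply the task control command mapped to the incoming operation."""
    command = OPERATION_TO_COMMAND.get(operation)
    if command is None:
        return task  # unmapped operation: leave the task unchanged
    actions = {"pause_task": lambda t: {**t, "state": "paused"},
               "start_task": lambda t: {**t, "state": "running"}}
    return actions[command](task)

task = {"name": "jogging", "state": "running"}
print(handle_operation("single_tap", task))  # state becomes 'paused'
```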
And for the content sending equipment, the task can be controlled in response to the task control instruction sent by the intelligent glasses.
In addition to controlling tasks through control operations on the smart glasses, it should be considered that in some scenarios, such as sports and fitness, manual operation may be inconvenient for the user. In this case, a voice-based task control scheme may be configured.
First, the content transmitting apparatus can acquire voice information. The voice information may be directly acquired by the content transmitting device, or may be acquired by the smart glasses and transmitted to the content transmitting device.
Next, the content transmitting apparatus may recognize the voice information and determine a keyword related to task control. Specifically, the keywords may be configured in advance, and the disclosure does not limit the process.
Then, the content transmission apparatus may control the task based on the keyword.
For example, when the user says "stop navigation", the navigation task of the content transmission apparatus is terminated. It can be understood that once the task terminates, the target content disappears from the smart glasses.
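The voice control path above (recognize the utterance, match pre-configured keywords, control the task) can be sketched as follows. Real speech recognition is out of scope here and replaced by plain text input; the keywords and states are illustrative:

```python
# Hypothetical sketch of keyword-based voice task control: match the
# recognized utterance against pre-configured keywords and apply the
# mapped control action to the task.

KEYWORD_ACTIONS = {  # pre-configured keyword -> resulting task state
    "stop navigation": "terminated",
    "pause navigation": "paused",
    "resume navigation": "running",
}

def control_task_by_voice(utterance, task):
    for keyword, new_state in KEYWORD_ACTIONS.items():
        if keyword in utterance.lower():
            task["state"] = new_state
            return task
    return task  # no task-control keyword found: leave the task unchanged

task = {"name": "navigation", "state": "running"}
print(control_task_by_voice("Please stop navigation now", task)["state"])
# terminated
```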
An interactive process of content processing of the embodiment of the present disclosure will be explained with reference to fig. 8.
In step S802, the content transmission apparatus determines the target content to be transmitted. As described above, the target content may be generated based on the running task.
In step S804, when it is determined that the timing to transmit the target content is reached, the user performs a touch operation with respect to the content transmission apparatus and the smart glasses.
In step S806, the content transmitting apparatus transmits the target content to the smart glasses.
In step S808, the smart glasses may play the target content.
Thereafter, the control process of the task may also be performed through the smart glasses.
In step S810, the smart glasses generate a task control command in response to the task control operation.
In step S812, the smart glasses may transmit the task control instruction to the content transmitting apparatus.
In step S814, the content transmission device may control the task according to the task control instruction.
The following describes a content processing scheme of the present disclosure by taking a navigation scenario as an example.
Referring to fig. 9, after the user sets a destination on the cell phone 91 and initiates navigation, navigation information on the interface of the cell phone 91 may be presented. In a case where an application program associated with the smart glasses 92 runs on the mobile phone 91, the application program may acquire the navigation information from the process, and generate target content to be sent based on the navigation information.
In a case where the user needs to display the navigation information on the smart glasses 92, the user may perform a touch-and-click operation based on NFC with respect to the mobile phone 91 and the smart glasses 92, and in this case, the mobile phone 91 may transmit the target content to the smart glasses 92 through bluetooth. Thus, the target content can be displayed on the lenses of the smart glasses 92.
In one aspect, although FIG. 9 illustrates only one lens for presentation of information, the target content may be presented on both lenses, or different portions of the target content may be presented separately.
On the other hand, as can be seen from fig. 9, the target content presented on the smart glasses 92 may not be the entire navigation information. As can be seen from the content displayed on the interface of the mobile phone 91, the navigation information further includes at least the remaining navigation time, information about the next road, and the like; these pieces of information are selectively removed when the mobile phone 91 generates the target content. This takes into account that a glasses lens is smaller than the mobile phone interface and that the wearer also needs to see the real road, so in some strategies of the present disclosure, not all navigation information is presented on the glasses lens. In addition, the specifically presented target content can be customized to meet the personalized requirements of different users.
As shown in fig. 9, information on the current road and the direction and distance of the next road may be displayed on the smart glasses 92. It is understood that as the user travels, the data of the target content may change, that is, the information displayed on the smart glasses 92 may also change.
The following describes the content processing scheme of the present disclosure by taking a motion scene as an example.
Referring to fig. 10, after the user makes a jogging setting on the cell phone 101, motion information on the cell phone 101 interface may be presented. When an application program associated with the smart glasses 102 runs on the mobile phone 101, the application program may acquire the motion information from a process and generate target content to be sent based on the motion information.
In a case where the user needs to display the motion information on the smart glasses 102, the user may perform a touch-and-click operation based on NFC with respect to the mobile phone 101 and the smart glasses 102, in which case the mobile phone 101 may transmit the target content to the smart glasses 102 through bluetooth. Thus, the target content can be displayed on the lenses of the smart glasses 102.
Similarly, in one aspect, the target content may also be presented on two lenses, or different portions of the target content may be presented separately.
On the other hand, the target content presented on the smart glasses 102 may not be all of the motion information, but only the currently set jogging state (shown as the image of a running person) and the number of kilometers completed. However, the specifically presented target content may be customized by the user, which is not limited by this disclosure.
In addition, as the user keeps jogging, the data of the target content may change, that is, the information displayed on the smart glasses 102 may also change. Referring to fig. 10, the displayed content may change, at least in terms of the number of kilometers completed.
In summary, according to the content processing method of the present disclosure, on one hand, the user's participation may be limited to bringing the content sending device and the smart glasses within the communication distance of the first communication mode so that the content sending device determines the smart glasses, while the remaining operations are implemented automatically by the content sending device; for the user, content transmission is therefore easy to perform and highly convenient, and the user controls when the content is sent. On the other hand, the present disclosure provides a new scheme for transmitting content, in which the communication result of the first communication mode is used as the trigger condition for the content transmitting device to transmit the content through the second communication mode; even in scenarios where the first communication mode is inconvenient or unable to carry the content itself, the content can still be transmitted to the smart glasses, thereby expanding the application scenarios of transmitting content to smart glasses. On yet another hand, the content sent to the smart glasses can be content generated by conversion based on a task, which enriches the application scenarios of the smart glasses and greatly improves the user experience.
It should be noted that although the various steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
Further, the present exemplary embodiment also provides a content processing apparatus applied to a content transmission device.
Fig. 11 schematically shows a block diagram of a content processing apparatus of an exemplary embodiment of the present disclosure. Referring to fig. 11, a content processing apparatus 1100 according to an exemplary embodiment of the present disclosure may include a content determining module 1101 and a content transmitting module 1103.
Specifically, the content determining module 1101 may be configured to determine target content to be sent; the content sending module 1103 may be configured to send the target content to the smart glasses through the second communication method when the smart glasses are determined based on the first communication method, so that the smart glasses play the target content.
According to an exemplary embodiment of the present disclosure, the content sending module 1103 may be configured to perform: and responding to the operation of touching or approaching the intelligent glasses, and determining the intelligent glasses through a first communication mode.
According to an exemplary embodiment of the present disclosure, referring to fig. 12, the content processing apparatus 1200 may further include a communication establishing module 1201 compared to the content processing apparatus 1100.
In particular, the communication establishing module 1201 may be configured to perform: after the smart glasses are determined based on the first communication mode, acquiring equipment information of the smart glasses; and establishing a second communication mode with the intelligent glasses according to the equipment information of the intelligent glasses so as to send the target content to the intelligent glasses through the second communication mode.
According to an exemplary embodiment of the disclosure, the communication establishing module 1201 may be configured to perform: before the smart glasses are determined based on the first communication mode, a second communication mode with the smart glasses is established in advance.
According to an exemplary embodiment of the present disclosure, the content determination module 1101 may be configured to perform: determining task information of a task currently running; and converting the task information into target content.
According to an exemplary embodiment of the present disclosure, the content determination module 1101 may be further configured to perform: and responding to the task setting operation of the user on the application interface and starting the task.
According to an exemplary embodiment of the present disclosure, the process of the content determination module 1101 converting the task information into the target content may be configured to perform: extracting feature information from the task information, wherein the feature information comprises at least one feature item and feature data of the feature item; determining a content configuration style corresponding to the feature item; and generating target content by combining the feature data of the feature item and the content configuration style corresponding to the feature item.
According to an exemplary embodiment of the present disclosure, the content determination module 1101 may be further configured to perform: and updating the target content in real time by using the generated characteristic data in the process of executing the task so as to enable the target content to be consistent with the process of executing the task.
According to an exemplary embodiment of the present disclosure, referring to fig. 13, the content processing apparatus 1300 may further include a task control module 1301 in comparison to the content processing apparatus 1100.
Specifically, the task control module 1301 may be configured to perform: responding to a task control instruction sent by the intelligent glasses, and controlling a task; wherein the task control instruction is generated based on the control operation of the user for the intelligent glasses.
According to an example embodiment of the present disclosure, the task control module 1301 may be configured to perform: acquiring voice information; recognizing the voice information and determining keywords related to task control; and controlling the task based on the keywords.
Since each functional module of the content processing apparatus in the embodiment of the present disclosure is the same as that in the embodiment of the method described above, it is not described herein again.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which may be a personal computer, a server, a terminal device, or a network device, etc.) to execute the method according to the embodiments of the present disclosure.
Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functionality of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided so as to be embodied by a plurality of modules or units.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (18)

1. A content processing method applied to a content transmission apparatus, comprising:
determining target content to be sent;
and under the condition that the smart glasses are determined based on the first communication mode, the target content is sent to the smart glasses through a second communication mode so that the smart glasses can play the target content.
2. The content processing method according to claim 1, wherein determining the smart glasses based on the first communication method includes:
and responding to the operation of touching or approaching the intelligent glasses, and determining the intelligent glasses through the first communication mode.
3. The content processing method according to claim 2, wherein after the smart glasses are determined based on the first communication method, the content processing method further comprises:
acquiring equipment information of the intelligent glasses;
and establishing the second communication mode with the intelligent glasses according to the equipment information of the intelligent glasses so as to send the target content to the intelligent glasses through the second communication mode.
4. The content processing method according to claim 2, wherein before the smart glasses are determined based on the first communication method, the content processing method further comprises:
the second communication mode with the smart glasses is established in advance.
5. The content processing method according to any one of claims 1 to 4, wherein determining the target content to be transmitted comprises:
determining task information of a task currently running;
and converting the task information into the target content.
6. The content processing method according to claim 5, wherein the content processing method further comprises:
and responding to the task setting operation of the user on the application interface and starting the task.
7. The content processing method of claim 5, wherein converting the task information into the target content comprises:
extracting feature information from the task information, wherein the feature information comprises at least one feature item and feature data of the feature item;
determining a content configuration style corresponding to the characteristic item;
and generating the target content by combining the feature data of the feature item and the content configuration style corresponding to the feature item.
8. The content processing method according to claim 7, wherein the content processing method further comprises:
and updating the target content in real time by using the generated characteristic data in the task execution process so as to enable the target content to be consistent with the task execution process.
9. The content processing method according to claim 8, wherein the content processing method further comprises:
responding to a task control instruction sent by the intelligent glasses, and controlling the task;
wherein the task control instruction is generated based on a control operation of a user for the smart glasses.
10. The content processing method according to claim 8, wherein the content processing method further comprises:
acquiring voice information;
recognizing the voice information and determining keywords related to task control;
controlling the task based on the keywords.
11. A content processing apparatus applied to a content transmission device, comprising:
the content determining module is used for determining target content to be sent;
and the content sending module is used for sending the target content to the intelligent glasses through a second communication mode under the condition that the intelligent glasses are determined based on the first communication mode so that the intelligent glasses can play the target content.
12. A content processing system, comprising:
the content sending equipment is used for determining target content to be sent, and sending the target content to the intelligent glasses through a second communication mode under the condition that the intelligent glasses are determined based on the first communication mode;
the intelligent glasses are used for playing the target content.
13. The content processing system according to claim 12, wherein the target content includes image content, wherein the smart glasses include:
a content receiving unit, for receiving the image content transmitted by the content transmitting apparatus based on the second communication method;
a light emitting unit for playing the image content;
and the image display unit is used for displaying the image content.
14. The content processing system of claim 13, wherein the target content further comprises audio content, wherein the smart glasses further comprise:
and the audio playing unit is used for playing the audio content.
15. The content processing system according to any one of claims 12 to 14, wherein the process of the content transmission device determining the target content is configured to perform: determining task information of a task currently running, and converting the task information into the target content.
16. The content processing system of claim 15, wherein the smart glasses further comprise:
and the task control unit is used for responding to the task control operation of the user, generating a task control instruction and sending the task control instruction to the content sending equipment so that the content sending equipment can control the task.
17. A computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing a content processing method according to any one of claims 1 to 10.
18. An electronic device, comprising:
a processor;
a memory for storing one or more programs which, when executed by the processor, cause the processor to implement the content processing method of any one of claims 1 to 10.
CN202110586615.7A 2021-05-27 2021-05-27 Content processing method, device, system, storage medium and electronic equipment Active CN113329375B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110586615.7A CN113329375B (en) 2021-05-27 2021-05-27 Content processing method, device, system, storage medium and electronic equipment
PCT/CN2022/077137 WO2022247363A1 (en) 2021-05-27 2022-02-21 Content processing method, apparatus, and system, storage medium, and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110586615.7A CN113329375B (en) 2021-05-27 2021-05-27 Content processing method, device, system, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN113329375A true CN113329375A (en) 2021-08-31
CN113329375B CN113329375B (en) 2023-06-27

Family

ID=77421927

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110586615.7A Active CN113329375B (en) 2021-05-27 2021-05-27 Content processing method, device, system, storage medium and electronic equipment

Country Status (2)

Country Link
CN (1) CN113329375B (en)
WO (1) WO2022247363A1 (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105162497A (en) * 2015-08-04 2015-12-16 天地融科技股份有限公司 Data transmission method, terminal, electronic signature device and system
CN105320279A (en) * 2014-07-31 2016-02-10 三星电子株式会社 Wearable glasses and method of providing content using the same
US20170108918A1 (en) * 2015-10-20 2017-04-20 Bragi GmbH Second Screen Devices Utilizing Data from Ear Worn Device System and Method
CN107979830A (en) * 2017-11-21 2018-05-01 出门问问信息科技有限公司 A kind of Bluetooth connecting method, device, equipment and the storage medium of intelligent back vision mirror
CN108600632A (en) * 2018-05-17 2018-09-28 Oppo(重庆)智能科技有限公司 It takes pictures reminding method, intelligent glasses and computer readable storage medium
CN109890012A (en) * 2018-12-29 2019-06-14 北京旷视科技有限公司 Data transmission method, device, system and storage medium
CN109996348A (en) * 2017-12-29 2019-07-09 中兴通讯股份有限公司 Method, system and the storage medium that intelligent glasses are interacted with smart machine
CN111367407A (en) * 2020-02-24 2020-07-03 Oppo(重庆)智能科技有限公司 Intelligent glasses interaction method, intelligent glasses interaction device and intelligent glasses
CN111479148A (en) * 2020-04-17 2020-07-31 Oppo广东移动通信有限公司 Wearable device, glasses terminal, processing terminal, data interaction method and medium
CN112130788A (en) * 2020-08-05 2020-12-25 华为技术有限公司 Content sharing method and device
CN112732217A (en) * 2020-12-30 2021-04-30 深圳增强现实技术有限公司 Information interaction method for smart glasses oriented to 5G messages, terminal, and storage medium
CN112817665A (en) * 2021-01-22 2021-05-18 北京小米移动软件有限公司 Equipment interaction method and device and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105760123A (en) * 2014-12-17 2016-07-13 中兴通讯股份有限公司 Glasses, display terminal as well as image display processing system and method
CN112269468A (en) * 2020-10-23 2021-01-26 深圳市恒必达电子科技有限公司 Human-computer interaction smart glasses, method, and platform for acquiring cloud information based on Bluetooth, 2.4G, and Wi-Fi connections

Also Published As

Publication number Publication date
WO2022247363A1 (en) 2022-12-01
CN113329375B (en) 2023-06-27

Similar Documents

Publication Publication Date Title
CN112396679B (en) Virtual object display method and device, electronic equipment and medium
CN112199016B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN112527174B (en) Information processing method and electronic equipment
KR20200101014A (en) Electronic device supporting recommendation and download of avatar
US20230308534A1 (en) Function Switching Entry Determining Method and Electronic Device
CN111432245B (en) Multimedia information playing control method, device, equipment and storage medium
CN112188461B (en) Control method and device of near field communication device, medium and electronic equipment
CN111339938A (en) Information interaction method, device, equipment and storage medium
US20220206582A1 (en) Media content items with haptic feedback augmentations
CN113238727A (en) Screen switching method and device, computer readable medium and electronic equipment
US20200236441A1 (en) Electronic device and method of providing content therefor
CN110515610B (en) Page drawing control method, device and equipment
CN113556423B (en) Information processing method, device, system, storage medium and electronic equipment
WO2024027819A1 (en) Image processing method and apparatus, device, and storage medium
CN111880647B (en) Three-dimensional interface control method and terminal
CN113190307A (en) Control adding method, device, equipment and storage medium
CN113329375B (en) Content processing method, device, system, storage medium and electronic equipment
CN114371904B (en) Data display method and device, mobile terminal and storage medium
CN114356529A (en) Image processing method and device, electronic equipment and storage medium
CN111770484B (en) Analog card switching method and device, computer readable medium and mobile terminal
CN114266305A (en) Object identification method and device, electronic equipment and storage medium
CN113407318A (en) Operating system switching method and device, computer readable medium and electronic equipment
CN114429506A (en) Image processing method, apparatus, device, storage medium, and program product
CN111524518A (en) Augmented reality processing method and device, storage medium and electronic equipment
CN112667321A (en) Quick application starting method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant