CN117478654A - Abnormality processing method, device and cooperative work system for image data transmission process

Abnormality processing method, device and cooperative work system for image data transmission process

Info

Publication number
CN117478654A
CN117478654A
Authority
CN
China
Prior art keywords
packet
image data
camera
image
tablet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210859609.9A
Other languages
Chinese (zh)
Inventor
滕智飞
李裕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202210859609.9A priority Critical patent/CN117478654A/en
Publication of CN117478654A publication Critical patent/CN117478654A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/80 Responding to QoS

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present application provides an exception handling method, a device and a cooperative work system for the image data transmission process. In the method, every network packet generated by the second electronic device carries packet identification information and state identification information. The first electronic device that receives a network packet can therefore determine from those two pieces of identification information whether the packet is an abnormal feedback packet, without parsing the packet, that is, without needing to know the specific content carried in its data body. The first electronic device thus learns in advance that the image data requested this time cannot be displayed, and can take countermeasures such as rebuilding the image data channel and re-acquiring the image data.

Description

Abnormality processing method, device and cooperative work system for image data transmission process
Technical Field
The present application relates to the field of communication technologies, and in particular to an abnormality processing method and device for the image data transmission process, and a collaborative work system.
Background
With the development of communication technology, data transmission, such as image data transmission, can be performed between any two electronic devices according to service requirements. Because image data is usually large, in order to increase transmission speed and avoid network congestion, the image data of a large photo, for example a photo of 100 K to 20 M, is usually decomposed into a plurality of image data packets. The resulting image data packets are placed in a transmit queue and are then taken out of the queue in sequence for transmission.
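As a loose illustration of this decompose-and-queue scheme (not the patent's own implementation; the chunk size and the commented-out transport call are assumptions), consider the following Python sketch:

```python
from collections import deque

CHUNK_SIZE = 1024  # hypothetical payload size per image data packet

def split_into_packets(image_bytes: bytes, chunk_size: int = CHUNK_SIZE) -> deque:
    """Decompose image data into an ordered queue of payload chunks."""
    queue = deque()
    for offset in range(0, len(image_bytes), chunk_size):
        queue.append(image_bytes[offset:offset + chunk_size])
    return queue

send_queue = split_into_packets(b"\x00" * (5 * 1024 * 1024))  # e.g. a 5 MB photo
while send_queue:
    packet = send_queue.popleft()  # packets leave the queue in order
    # transport.send(packet)      # actual transmission omitted
```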
However, in conventional image data transmission, the party receiving the image data cannot know whether a received data packet is abnormal without parsing it. The receiver must therefore wait for the sender to transmit all the image data packets and parse each one before it can know whether the image data they carry is normal. When the data turns out to be abnormal, this not only leaves the receiving device unable to restore the image, but also wastes device resources and network resources.
Disclosure of Invention
To solve the above technical problems, the present application provides an abnormality processing method, a device and a collaborative work system for the image data transmission process. The aim is to let the electronic device learn in advance, without parsing the data body of a received packet, that the image data requested this time cannot be displayed, so that the image data channel can be rebuilt in time and the image data re-acquired.
In a first aspect, the present application provides an anomaly handling method for the image data transmission process. The method is applied to a first electronic device and comprises: receiving a network packet sent by a second electronic device, the network packet carrying packet identification information and state identification information; when the packet identification information indicates that the network packet is an abnormal feedback packet, parsing the data body of the abnormal feedback packet, extracting the error code carried in it, and reporting the error code to a target application; when the packet identification information indicates that the network packet is an image data packet, determining from the state identification information whether the image data packet is normal; and when the state identification information indicates that the image data packet is abnormal, deleting the already-parsed image data, emptying the cache, and reporting the cause of the abnormality to the target application.
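Purely for illustration, the following Python sketch mirrors this receive-side dispatch; the Packet type, its field names and the print-based reporting are assumptions, and the quoted bit values are those introduced below:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    pkt_id: str   # "10" head, "00" middle, "01" tail, "11" abnormal feedback
    state: int    # 0 = normal, 1 = abnormal
    body: bytes

def handle_packet(pkt: Packet, assembled: bytearray, cache: list) -> None:
    if pkt.pkt_id == "11":                         # abnormal feedback packet
        error_code = int.from_bytes(pkt.body[0:4], "big")
        print(f"report to target app: error code {error_code}")
    elif pkt.state == 1:                           # abnormal image data packet
        assembled.clear()                          # delete already-parsed image data
        cache.clear()                              # empty the cache
        print("report to target app: image data packet is abnormal")
    else:                                          # normal image data packet
        assembled.extend(pkt.body)                 # keep assembling the image

handle_packet(Packet("11", 0, (404).to_bytes(4, "big") + bytes(124)), bytearray(), [])
```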
The first electronic device and the second electronic device may be of the same type, for example both may be devices with strong processing capability such as mobile phones or tablet computers.
The first electronic device and the second electronic device may also be of different types; for example, the first electronic device may be a device with strong processing capability such as a mobile phone or a tablet, and the second electronic device may be an internet of things device.
For the scenario in which the second electronic device is an internet of things device, the second electronic device may be, for example, a desk lamp.
For convenience of explanation, this application takes as examples a first electronic device with strong processing capability such as a mobile phone or a tablet, and a second electronic device that is an internet of things device such as a desk lamp.
When the network packet is an image data packet, it may further be classified as a head packet, a middle packet or a tail packet.
Illustratively, the packet identification information of the head packet is denoted by the "10" mentioned below, that of the middle packet by the "00" mentioned below, and that of the tail packet by the "01" mentioned below.
The packet identification information of the abnormal feedback packet is denoted, for example, by the "11" mentioned below.
Illustratively, state identification information of "0" indicates that the network packet is normal, and "1" indicates that it is abnormal.
For the data structures of the head packet, the middle packet and the tail packet under the different state identification information, excluding the data header, that is, the structures of the identification field and the data body, see figs. 18a to 20b; they are not repeated here.
For the data structure of the abnormal feedback packet excluding its data header, that is, the structure of its identification field and data body, see fig. 21; it is not described again here.
The operations of parsing the network packet, deleting the parsed image data and reporting the exception are implemented, for example, by a network packet assembling module, and the operations of receiving the network packet and emptying the cache are implemented, for example, by a network packet collecting module.
In this way, because every network packet generated by the second electronic device carries packet identification information and state identification information, the first electronic device receiving a network packet can determine whether it is an abnormal feedback packet without parsing it, that is, without knowing the specific content carried in its data body. The first electronic device can thus learn in advance that the image data requested this time cannot be displayed, and take countermeasures such as rebuilding the image data channel and re-acquiring the image data.
According to the first aspect, a network packet comprises a data header, an identification field and a data body. The data header occupies 12 bytes and the identification field occupies 1 byte; within the identification field, the binary values of bit 0 and bit 1, taken together, form the packet identification information, and the binary value of bit 3 is the state identification information. The data body is used to store image data or an error code. Because every network packet follows this data structure, when a network packet is received the first electronic device directly inspects bits 0 to 3 of the byte holding the identification field, and can quickly and accurately determine the attributes of the current network packet, namely whether it is an abnormal feedback packet or an image data packet, and whether an image data packet is normal.
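A minimal sketch of decoding this identification field follows; treating bit 0 as the most-significant bit of the byte is an assumption made for illustration, since the exact bit order is fixed by the figures (figs. 17 to 21) rather than stated in the text:

```python
HEADER_LEN = 12  # the data header occupies 12 bytes

def decode_identification(raw: bytes) -> tuple[str, int]:
    ident = raw[HEADER_LEN]          # the 1-byte identification field follows the header

    def bit(n: int) -> int:          # bit n, counted from the most-significant bit
        return (ident >> (7 - n)) & 1

    packet_id = f"{bit(0)}{bit(1)}"  # "10" head, "00" middle, "01" tail, "11" feedback
    state = bit(3)                   # 0 = normal, 1 = abnormal
    return packet_id, state

# Example: a byte of 0b1100_0000 decodes as an abnormal feedback packet ("11").
pkt_id, state = decode_identification(bytes(HEADER_LEN) + bytes([0b11000000]))
print(pkt_id, state)  # -> 11 0
```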
According to the first aspect, or any implementation of it, when the packet identification information indicates that the network packet is an abnormal feedback packet, the data body occupies 128 bytes, and the error code is filled into bytes 0 to 3 of the data body. Limiting the data body of an abnormal feedback packet to 128 bytes reduces both the transmission pressure and the parsing pressure, so the abnormality can be reported quickly.
According to the first aspect, or any implementation of it, when the packet identification information indicates that the network packet is an abnormal feedback packet, parsing the data body of the abnormal feedback packet and extracting the error code carried in it comprises: parsing the data body of the abnormal feedback packet and extracting the error code from bytes 0 to 3 of the data body. After parsing, the content is thus read directly from bytes 0 to 3, with no need to examine any other field, which is convenient and fast.
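An illustrative sketch of this extraction under stated assumptions (big-endian byte order within bytes 0 to 3 is a guess; the patent does not specify it):

```python
FEEDBACK_BODY_LEN = 128  # the data body of an abnormal feedback packet is fixed at 128 bytes

def extract_error_code(body: bytes) -> int:
    assert len(body) == FEEDBACK_BODY_LEN, "feedback body must be 128 bytes"
    return int.from_bytes(body[0:4], "big")  # only bytes 0-3 carry the error code

body = (1001).to_bytes(4, "big") + bytes(FEEDBACK_BODY_LEN - 4)
print(extract_error_code(body))  # -> 1001, then reported to the target application
```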
According to the first aspect, or any implementation of it, when the packet identification information indicates that the network packet is an image data packet and the image data packet is a head packet, the data body comprises a first data portion and a second data portion. The first data portion is a reserved field occupying 128 bytes and carrying expansion information; the second data portion is filled with the image data. By distinguishing the reserved field from the image data field in this way, subsequent parsing can obtain different content from the different portions according to service requirements.
According to the first aspect, or any implementation of it, the image data packet is a head packet, and the method further comprises: when the state identification information indicates that the image data packet is normal, parsing the image data packet and extracting the image data from the second data portion.
According to the first aspect, or any implementation of it, when the packet identification information indicates that the network packet is an image data packet and the image data packet is a middle packet or a tail packet, the data body is filled with image data starting from byte 0.
According to the first aspect, or any implementation of it, the image data packet is a middle packet or a tail packet, and the method further comprises: when the state identification information indicates that the image data packet is normal, parsing the image data packet and extracting the image data from byte 0 of the data body.
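The following sketch summarizes where the image data starts in the data body for each packet type; the function name is invented and the packet-id strings are the illustrative values used above:

```python
RESERVED_LEN = 128  # reserved field at the start of the head packet's data body

def image_bytes(packet_id: str, body: bytes) -> bytes:
    if packet_id == "10":           # head packet: skip the 128-byte reserved field
        return body[RESERVED_LEN:]
    if packet_id in ("00", "01"):   # middle or tail packet: image data from byte 0
        return body
    raise ValueError("an abnormal feedback packet carries an error code, not image data")

print(len(image_bytes("10", bytes(1024))))  # -> 896: the reserved field is skipped
print(len(image_bytes("00", bytes(1024))))  # -> 1024: the whole body is image data
```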
According to the first aspect, or any implementation of it, the method further comprises: when a new middle packet is received, judging whether the amount of image data already extracted exceeds a set threshold; when it does, marking the new middle packet as over-limit and discarding it; and, when a tail packet or a new head packet is subsequently received, deleting the already-extracted image data, emptying the cache, and reporting the cause of the abnormality to the target application.
According to the first aspect, or any implementation of it, deleting the already-extracted image data and emptying the cache when a tail packet or a new head packet is received comprises: when a tail packet is received, deleting the extracted image data and emptying all cached image data packets; and when a new head packet is received, deleting the extracted image data and emptying all cached image data packets except the new head packet. Setting different cache-emptying strategies according to the attribute of the received network packet, tail packet or head packet, avoids deleting network packets that need to be retained.
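A hypothetical sketch of this overrun handling and of the two cache-emptying strategies; the threshold value and all names are assumptions:

```python
MAX_IMAGE_BYTES = 20 * 1024 * 1024  # illustrative threshold, e.g. a 20 MB upper bound

def on_middle_packet(body: bytes, assembled: bytearray, overrun: dict) -> None:
    if len(assembled) > MAX_IMAGE_BYTES:
        overrun["flag"] = True       # mark the new middle packet as over-limit...
        return                       # ...and discard it
    assembled.extend(body)

def flush_cache(packet_id: str, assembled: bytearray, cache: list, new_head=None) -> None:
    assembled.clear()                # delete the already-extracted image data
    if packet_id == "01":            # tail packet: empty all cached packets
        cache.clear()
    elif packet_id == "10":          # new head packet: keep only that packet
        cache.clear()
        cache.append(new_head)
    # the cause of the abnormality is then reported to the target application
```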
According to the first aspect, or any implementation of it, the method further comprises: starting a timer when a photographing request is sent to the second electronic device; while the timer is running, receiving the network packets sent by the second electronic device and processing each network packet according to the packet identification information and state identification information it carries; and, if the duration counted by the timer reaches the set timeout duration while the network packets are still being processed, destroying the thread that processes the network packets and sending a destroy instruction to the second electronic device, so that the second electronic device destroys the image data channel between the second electronic device and the first electronic device.
Processing a network packet according to the packet identification information and state identification information it carries is, for example, the following:
when the packet identification information in the network packet indicates that it is an abnormal feedback packet, parsing the data body of the abnormal feedback packet, extracting the error code carried in it, and reporting the error code to the target application; when the packet identification information indicates that the network packet is an image data packet, determining from the state identification information whether the image data packet is normal; when the state identification information indicates that the image data packet is abnormal, deleting the already-parsed image data, emptying the cache, and reporting the cause of the abnormality to the target application; and when the state identification information indicates that the image data packet is normal, parsing the image data packet and extracting the image data carried in it.
In this way, the thread that processes the network packets is destroyed automatically and a destroy instruction is sent to the second electronic device, so that the second electronic device destroys the image data channel between the two devices, reducing the resource occupation of the tablet.
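A rough sketch of this timeout guard using Python's threading.Timer; the timeout value and the message strings are assumptions:

```python
import threading

TIMEOUT_SECONDS = 10.0  # hypothetical "set timeout duration"

class PhotoRequest:
    def __init__(self, send_to_peer):
        self.send_to_peer = send_to_peer
        self.done = threading.Event()
        self.timer = threading.Timer(TIMEOUT_SECONDS, self._on_timeout)

    def start(self) -> None:
        self.send_to_peer("photograph")  # send the photographing request...
        self.timer.start()               # ...and start timing alongside it

    def finish(self) -> None:
        self.done.set()                  # all packets processed before the deadline
        self.timer.cancel()

    def _on_timeout(self) -> None:
        if not self.done.is_set():
            self.done.set()              # stop the packet-processing work
            self.send_to_peer("destroy") # peer tears down the image data channel

req = PhotoRequest(print)  # print stands in for the real transport
req.start()
req.finish()
```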
In a second aspect, the present application provides an electronic device. The electronic device includes a memory and a processor that are coupled; the memory stores program instructions that, when executed by the processor, cause the electronic device to perform the method of the first aspect or any possible implementation of the first aspect.
The second aspect and any implementation of it correspond to the first aspect and any implementation of it, respectively. For the technical effects of the second aspect and any implementation of it, see those of the first aspect and its implementations; they are not repeated here.
In a third aspect, the present application provides a collaborative work system. The system comprises a first electronic device for performing the method of the first aspect or any possible implementation of it, and a second electronic device provided with a camera for collecting image data, a target application in the first electronic device being bound to the second electronic device. The first electronic device is configured to register in its system a virtual camera corresponding to the camera, and to send a photographing request to the second electronic device by invoking the virtual camera. The second electronic device is configured to invoke the camera to capture image data according to the photographing request of the first electronic device, and to send the image data to the target application of the first electronic device for preview display.
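As a highly simplified, hypothetical sketch of this interaction (all class and method names are invented; in the real system the virtual camera is registered through the services described with fig. 2b below):

```python
class DeskLamp:
    def capture(self) -> bytes:
        return b"jpeg-bytes"                     # the camera collects image data

class Tablet:
    def __init__(self, lamp: DeskLamp):
        self.virtual_cameras: dict[str, DeskLamp] = {}
        self.lamp = lamp                         # target application bound to the lamp

    def register_virtual_camera(self, name: str) -> None:
        self.virtual_cameras[name] = self.lamp   # virtual camera stands in for the real one

    def photograph(self, name: str) -> bytes:
        camera = self.virtual_cameras[name]      # calling the virtual camera...
        return camera.capture()                  # ...sends the photographing request

tablet = Tablet(DeskLamp())
tablet.register_virtual_camera("lamp-cam")
preview = tablet.photograph("lamp-cam")          # image data goes to preview display
```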
According to a third aspect, the second electronic device is an internet of things device.
According to the third aspect, or any implementation of it, the internet of things device is a desk lamp, and the camera is used to collect image data downwards.
The third aspect and any implementation of it correspond to the first aspect and any implementation of it, respectively. For the technical effects of the third aspect and any implementation of it, see those of the first aspect and its implementations; they are not repeated here.
In a fourth aspect, the present application provides a computer-readable medium storing a computer program, the computer program comprising instructions for performing the method of the first aspect or any possible implementation of the first aspect.
The fourth aspect and any implementation of it correspond to the first aspect and any implementation of it, respectively. For the technical effects of the fourth aspect and any implementation of it, see those of the first aspect and its implementations; they are not repeated here.
In a fifth aspect, the present application provides a computer program product; the computer program comprises instructions for performing the method of the first aspect or any possible implementation of the first aspect.
The fifth aspect and any implementation of it correspond to the first aspect and any implementation of it, respectively. For the technical effects of the fifth aspect and any implementation of it, see those of the first aspect and its implementations; they are not repeated here.
In a sixth aspect, the present application provides a chip comprising a processing circuit and transceiver pins. The transceiver pins and the processing circuit communicate with each other via an internal connection path; the processing circuit performs the method of the first aspect or any possible implementation of it, controlling the receiving pin to receive signals and the transmitting pin to transmit signals.
The sixth aspect and any implementation of it correspond to the first aspect and any implementation of it, respectively. For the technical effects of the sixth aspect and any implementation of it, see those of the first aspect and its implementations; they are not repeated here.
Drawings
FIGS. 1a-1b are exemplary illustrations of an application scenario;
fig. 2a is a schematic diagram of a hardware structure of an exemplary electronic device;
FIG. 2b is a schematic diagram of a software architecture of an exemplary electronic device;
fig. 3a is a schematic diagram of a hardware structure of an exemplary illustrated internet of things device;
fig. 3b is a schematic software architecture diagram of an exemplary illustrated internet of things device;
FIG. 4a is a schematic diagram of module interaction according to an embodiment of the present disclosure;
FIG. 4b is a schematic diagram of module interaction according to an embodiment of the present disclosure;
FIGS. 5a-5b are exemplary illustrations of an application scenario;
FIG. 6a is a schematic diagram of module interaction according to an embodiment of the present disclosure;
FIG. 6b is a schematic diagram of module interaction according to an embodiment of the present disclosure;
FIGS. 7a-7b are exemplary illustrations of an application scenario;
fig. 8 is a schematic diagram of module interaction provided in an embodiment of the present application;
FIGS. 9a-9c are exemplary illustrations of an application scenario;
FIG. 10 is a schematic diagram of module interaction according to an embodiment of the present application;
FIGS. 11a-11b are exemplary illustrations of an application scenario;
fig. 12 is a schematic diagram illustrating image data transmission between a tablet and a desk lamp;
Fig. 13 is a schematic diagram illustrating a data structure of an entire image data packet;
fig. 14 is a schematic diagram illustrating the structure of a header in an image data packet;
FIG. 15 is a schematic diagram of a data structure of an exemplary image data packet;
fig. 16 is a schematic diagram illustrating the structure of an RTP header in an image data packet;
FIG. 17 is a schematic diagram illustrating the structure of an identification field in an image data packet;
fig. 18a is a schematic diagram illustrating the structure of a head packet of an image data packet that is a normal packet;
fig. 18b is a schematic diagram illustrating the structure of a head packet of an image data packet that is an abnormal packet;
fig. 19a is a schematic diagram illustrating the structure of a middle packet of an image data packet that is a normal packet;
fig. 19b is a schematic diagram illustrating the structure of a middle packet of an image data packet that is an abnormal packet;
fig. 20a is a schematic diagram illustrating the structure of a tail packet of an image data packet that is a normal packet;
fig. 20b is a schematic diagram illustrating the structure of a tail packet of an image data packet that is an abnormal packet;
fig. 21 is a schematic diagram of the structure of an exemplary shown abnormal feedback packet;
FIG. 22 is an exemplary block diagram of the modules that implement image data processing on the desk lamp side and the tablet side;
fig. 23 is a schematic diagram of an exemplary abnormality processing method for the case where the desk lamp side cannot obtain image data;
fig. 24 is a schematic diagram of an exemplary abnormality processing method for the case where an abnormality occurs on the desk lamp side during transmission;
FIG. 25 is a schematic diagram of an exemplary abnormality processing method for the case where the image data received by the tablet side exceeds the set threshold;
fig. 26 is a schematic diagram of an exemplary abnormality processing method for the case where the time taken by the tablet side to receive the image data packets exceeds the set duration.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present application with reference to the accompanying drawings. The described embodiments are evidently some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without creative effort shall fall within the protection scope of the present application.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, or B alone.
The terms first and second and the like in the description and in the claims of embodiments of the present application are used for distinguishing between different objects and not necessarily for describing a particular sequential order of objects. For example, the first target object and the second target object, etc., are used to distinguish between different target objects, and are not used to describe a particular order of target objects.
In the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, unless otherwise indicated, the meaning of "a plurality" means two or more. For example, the plurality of processing units refers to two or more processing units; the plurality of systems means two or more systems.
With the development of the internet, online education is popular with more and more people, and users (such as students) have a growing demand for it. In some application scenarios, when students encounter a new word, they can look it up online to obtain the relevant interpretation; in some scenarios, reading book content online makes it more convenient for students to learn knowledge and pronunciation; in some scenarios, students need to submit their homework online. How to meet users' online education needs based on intelligent devices is therefore a problem to be solved.
At present, for online education scenarios, a user usually uses an intelligent learning machine with shooting and display functions. Such a machine needs a camera or a reflecting mirror at a special position to shoot books, so its universality and usability are weak. In addition, a device with both shooting and display functions requires strong hardware and system support, so the device cost is high. How to provide a better online education experience based on intelligent devices, improve universality and usability, and reduce the cost of online education is therefore a problem to be solved.
The collaborative work system provided by the embodiments of the present application can be applied to online education scenarios. The system comprises an electronic device and a desk lamp that are communicatively connected; the desk lamp is provided with a camera that can shoot books downwards. The electronic device invokes the camera of the desk lamp to collect images and, combined with the platform's online education resources, meets the user's online education needs. The electronic device may be a tablet computer or a mobile phone. Outside the online education scenario, the electronic device and the desk lamp can still serve users through their respective basic functions (i.e., the communication function and the lighting function). The system can therefore create a better online education experience based on two intelligent devices that users already commonly own, and has strong universality and usability. In addition, since tablet computers and mobile phones are household necessities, and the cost of a desk lamp device with a shooting function is lower than that of a device with both shooting and display functions, the user's online education cost is greatly reduced.
The technical solution provided by the present application is explained below by taking a tablet as an example of the electronic device.
Fig. 1a shows an exemplary application scenario. As shown in fig. 1a, the cooperative work system comprises a tablet 100 and a desk lamp 200 that have established a communication connection. The desk lamp 200 includes a camera 201 for capturing images downwards, for example for capturing the text or picture content of a book. An education APP (Application) is installed on the tablet 100; the education APP can call the camera 201 of the desk lamp 200 to collect images and, based on the images collected by the camera 201, provide the user with various online education functions, such as online word searching, online reading and online homework submission.
Although the tablet is also provided with front and rear cameras, shooting a book with either of them requires the user to hold the tablet and aim the camera at the book. The shot is therefore unstable, the finger-reading and point-reading operations performed by the user's hands are affected, and a good online education experience cannot be provided. As shown in fig. 1a, the tablet 100 and the desk lamp 200 can instead be placed in fixed positions; the tablet 100 uses the camera 201 of the desk lamp 200 to shoot the book, the captured images are stable, the success rate of content recognition is high, and the user can flexibly perform finger-reading and point-reading operations on the book. The linkage of tablet computer and desk lamp can therefore provide a better online education experience for the user.
As shown in fig. 1b, the tablet 100 and the desk lamp 200 may communicate in the near field or in the far field. Near-field communication completes information interaction between devices through a router or a similar device, while far-field communication completes it through a cloud server. Illustratively, the tablet 100 and the desk lamp 200 may implement near-field communication based on a Wi-Fi (wireless fidelity) network protocol or the like.
Fig. 2a is a schematic structural diagram of the electronic device 100. Alternatively, the electronic device 100 may be a terminal, which may also be referred to as a terminal device, and the terminal may be a cellular phone (cellular phone) or a tablet computer (pad), which is not limited in this application. It should be noted that the schematic structural diagram of the electronic device 100 may be applied to the flat panel in fig. 1a to 1 b. It should be understood that the electronic device 100 shown in fig. 2a is only one example of an electronic device, and that the electronic device 100 may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in fig. 2a may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The electronic device 100 may include: processor 110, external memory interface 120, internal memory 121, universal serial bus (universal serial bus, USB) interface 130, charge management module 140, power management module 141, battery 142, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headset interface 170D, sensor module 180, keys 190, motor 191, indicator 192, camera 193, display 194, and subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor, a gyroscope sensor, an acceleration sensor, a temperature sensor, a motion sensor, a barometric sensor, a magnetic sensor, a distance sensor, a proximity sensor, a fingerprint sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the electronic device 100, among others. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices, etc.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 and provides power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and the like.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wi-Fi network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., as applied on the electronic device 100.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques.
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
In the embodiment of the present application, the display screen 194 may display a photographing preview interface, a photographing image interface, and the like. It should be noted that, in the embodiment of the present application, the shooting preview interface refers to an interface where a user can view an image collected by the desk lamp camera in real time through the display screen 194.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions.
The internal memory 121 may be used to store computer executable program code including instructions. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121, for example, to cause the electronic device 100 to implement a cooperative method in the embodiments of the present application. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "horn," is used to convert audio electrical signals into sound signals. The electronic device 100 may listen to music, or to hands-free conversations, through the speaker 170A. In some embodiments, the electronic device 100 may be provided with a plurality of speakers 170A.
A receiver 170B, also referred to as a "earpiece", is used to convert the audio electrical signal into a sound signal. When electronic device 100 is answering a telephone call or voice message, voice may be received by placing receiver 170B in close proximity to the human ear.
Microphone 170C, also referred to as a "microphone" or "microphone", is used to convert sound signals into electrical signals. When making a call or transmitting voice information, the user can sound near the microphone 170C through the mouth, inputting a sound signal to the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, and may implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may also be provided with three, four, or more microphones 170C to enable collection of sound signals, noise reduction, identification of sound sources, directional recording functions, etc.
The earphone interface 170D is used to connect wired earphones. The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The pressure sensor is used for sensing a pressure signal and can convert the pressure signal into an electric signal. In some embodiments, the pressure sensor may be provided on the display screen 194. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor.
Touch sensors, also known as "touch panels". The touch sensor may be disposed on the display screen 194, and the touch sensor and the display screen 194 form a touch screen, which is also referred to as a "touch screen". The touch sensor is used to detect a touch operation acting on or near it. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type.
The keys 190 include a power-on key (or power key), a volume key, etc. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs, generating key signal inputs related to user settings and function controls of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects.
The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc.
The software system of the electronic device 100 may employ a layered architecture, an event driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In this embodiment, taking an Android system with a layered architecture as an example, a software structure of the electronic device 100 is illustrated.
Fig. 2b is a software architecture block diagram of the electronic device 100 according to an embodiment of the present application.
The layered architecture of the electronic device 100 divides the software into several layers, each with a distinct role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into five layers: from top to bottom, the application layer, the application framework layer, the system layer, the HAL (Hardware Abstraction Layer) layer, and the kernel layer.
The application layer may include a series of application packages.
As shown in fig. 2b, the application packages may include conversations, video, bluetooth, camera, WLAN, educational applications, device manager applications, and the like. The application packages may also include calendar, map, navigation, music, short messages, etc. applications.
Among other things, the educational application may be used to provide online education functions for the user, such as online word searching, online reading aloud, and online homework submission.
In some examples, a device manager application may be used to bind IOT (Internet of Things) devices such as desk lamps. In some examples, the educational application itself may bind such IOT devices.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 2b, the application framework layer may include a camera service, an authentication service, a hardware virtualization service, a device management service, a transmission management service, and the like.
Among other things, a camera service (camera service) may be used to invoke a camera (including a front-facing camera and/or a rear-facing camera) in response to a request of an application.
In the embodiment of the application, the camera service may be used for calling the virtual camera at the electronic device side, that is, calling the camera in the IOT device, in response to a request of the application.
Authentication services are used to provide secure rights management capabilities.
A hardware virtualization service may be used to establish a logical channel between the electronic device side (i.e., the center device side) and the IOT device side, providing the ability to virtualize a camera.
The device management service can be used for discovering and managing the IOT devices and providing far-field (i.e. cloud) IOT device information and near-field (i.e. near-field connectable) IOT device information for application programs such as education applications.
The transmission management service can be used for establishing a physical transmission channel and providing data transmission capability.
In addition, a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, etc. may be included.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system may be used to build applications. A display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device 100, for example the management of call status (including connected, hung up, etc.).
The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files.
The notification manager allows an application to display notification information in the status bar. It can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction.
The system library and runtime layer (i.e., the system layer) includes the system library and the Android Runtime.
The Android Runtime includes a core library and virtual machines, and is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android. The application layer and the application framework layer run in the virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files, and performs functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.
In the embodiment of the present application, the Android Runtime further includes a virtual camera adaptation layer, which provides the virtual camera registration capability.
The system library in the system layer may include a plurality of functional modules. For example: multimedia platform, graphic image processing library, codec, etc.
The multimedia platform can be used for managing multimedia and supporting various common audio, video format playback and recording, still image files and the like. The multimedia platform may support a variety of audio and video coding formats, such as: MPEG4, h.264, MP3, AAC, AMR, JPG, PNG, etc.
Graphics image processing libraries may be used to implement graphics drawing, image rendering, compositing, and layer processing, among others.
The codec may be used to implement codec operations on audio data, video data.
The HAL layer is an interface layer between the operating system kernel and the hardware circuitry. HAL layers include, but are not limited to: audio HAL, sensor HAL, modem HAL, camera HAL, virtual camera HAL.
Wherein the audio HAL is used for processing the audio stream, for example, noise reduction, directional enhancement, etc. of the audio stream. The camera HAL is used for processing the image stream corresponding to the camera at the electronic equipment side, and the virtual camera HAL is used for processing the image stream corresponding to the virtual camera registered at the electronic equipment side, namely, the image stream acquired by the camera at the IOT equipment side.
The kernel layer is a layer between hardware and software. The kernel layer at least comprises a display driver, a camera driver, an audio driver, a network driver (such as a Wi-Fi driver), a CPU driver, a USB driver, a storage driver, a print driver, and the like. The hardware at least comprises a processor, a display screen, a Wi-Fi module, and the like.
It will be appreciated that the layers and components contained in the layers in the software structure shown in fig. 2b do not constitute a specific limitation of the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer layers than shown, and more or fewer components may be included in each layer, as the present application is not limited.
Fig. 3a is a schematic diagram of a hardware structure of the internet of things device 200. It should be noted that the schematic structural diagram of the internet of things device 200 may be applicable to the desk lamp in fig. 1a to 1 b. It should be understood that the internet of things device 200 shown in fig. 3a is only one example of an electronic device, and that the internet of things device 200 may have more or fewer components than shown in the figures, may combine two or more components, or may have different configurations of components. The various components shown in fig. 3a may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The internet of things device 200 may include: a processor 210, a camera 201, a wireless communication module 202, a memory 203, an audio module 204, a USB interface 205, a charge management module 206, a power management module 207, a battery 208, a lighting device 209, keys 211, etc.
Processor 210 may include one or more processing units such as, for example: the processor 210 may include a GPU, ISP, controller, memory, video codec, etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and a command center of the internet of things device 200. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
The camera 201 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. Taking a desk lamp as an example, the camera 201 may be disposed on a desk lamp stand for capturing images downwards.
The internet of things device 200 may implement a photographing function through an ISP, a camera 201, a video codec, a GPU, etc.
The ISP is used to process the data fed back by the camera 201. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, an ISP may be provided in the camera 201.
The wireless communication module 202 may provide solutions for wireless communication including WLAN (e.g., wi-Fi network), bluetooth (BT), etc. for use on the internet of things device 200. In some embodiments, the antenna of the internet of things device 200 and the wireless communication module 202 are coupled such that the internet of things device 200 can communicate with a network and other devices through wireless communication techniques.
Memory 203 may be used to store computer executable program code that includes instructions. The processor 210 executes the instructions stored in the memory 203, thereby performing various functional applications and data processing of the internet of things device 200, for example, to enable the internet of things device 200 to implement the cooperative working method in the embodiments of the present application.
The internet of things device 200 may implement audio functions, such as music playing, etc., through the audio module 204, the speaker 212, etc.
The USB interface 205 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 205 may be used to connect a charger to charge the internet of things device 200, or may be used to transfer data between the internet of things device 200 and a peripheral device.
The charge management module 206 is configured to receive a charge input from a charger. The charging management module 206 may also supply power to the internet of things device 200 through the power management module 207 while charging the battery 208.
The power management module 207 is used to connect the battery 208, the charge management module 206 and the processor 210. The power management module 207 receives input from the battery 208 and/or the charge management module 206 and provides power to the processor 210, the memory 203, the camera 201, the wireless communication module 202, the lighting device 209, and the like.
The keys 211 include a power-on key (or power key), and the like.
The software system of the internet of things device 200 may employ a layered architecture or other architecture, etc. The embodiment of the application takes a layered architecture as an example, and illustrates a software structure of the internet of things device 200.
Fig. 3b is a software architecture block diagram of the internet of things device 200 according to an embodiment of the present application.
The layered architecture of the internet of things device 200 divides the software into several layers, each of which has a distinct role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the system of the internet of things device 200 is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, a system layer, and a kernel layer.
As shown in fig. 3b, the application layer may include a device application service, which may be understood as a system level application, and the device application service is started after the system of the internet of things device 200 is started.
As shown in fig. 3b, the application framework layer may include device interconnection services, hardware abstraction services, resource managers, and the like.
The device interconnection service can be used for establishing a physical transmission channel, providing data transmission capability and managing a starting switch of the hardware abstraction service.
The hardware abstraction service may be used to establish a logical channel between the electronic device side (i.e., the hub device side) and the IOT device, provide the ability to virtualize the camera, and provide a camera open interface for the IOT device.
The resource manager may provide various resources to the application.
As shown in fig. 3b, the system layer may include a multimedia platform, a graphic image processing library, a codec, a device adaptation module, and the like.
The multimedia platform can be used to manage multimedia and supports various common audio, video, and still image files. The multimedia platform may support a variety of audio and video coding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
Graphics image processing libraries may be used to implement graphics drawing, image rendering, compositing, and layer processing, among others.
The codec may be used to implement codec operations on audio data, video data.
The device adaptation module implements the interface of the hardware abstraction service, can provide device information and capability queries, and can also provide functions for performing related operations on the IOT device side, such as opening the camera, photographing, and previewing.
It may be appreciated that, in order to implement the cooperative method in the embodiments of the present application, the electronic device 100 and the internet of things device 200 include corresponding hardware and/or software modules that perform the respective functions. The steps of an algorithm for each example described in connection with the embodiments disclosed herein may be embodied in hardware or a combination of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application in conjunction with the embodiments, but such implementation is not to be considered as outside the scope of this application.
FIG. 4a is a schematic diagram showing the interaction of the modules. Referring to fig. 4a, the embodiment of the present application provides a method flow for cooperative work of a tablet and a desk lamp, specifically including:
0. Device service initialization phase
S0.1, in response to a user operation, the device application service of the desk lamp starts, and the device application service loads the device interconnection service.
For example, the user operation may be an operation in which the user turns on the desk lamp power supply. And responding to the user operation, starting the desk lamp system, starting the equipment application service, and loading the equipment interconnection service. The device interconnection service may be used to establish a physical transmission channel between the tablet and the desk lamp, and to provide data transmission capability.
S0.2, loading hardware abstraction service by the equipment interconnection service of the desk lamp.
The device interconnection service may also control the opening of the hardware abstraction service. For example, after the device interconnection service is started, the device interconnection service may load the hardware abstraction service in the form of a plug-in. The hardware abstraction service can be used for establishing a logic channel between the tablet and the desk lamp, providing the capability of virtualizing the camera, and simultaneously providing an open interface of the desk lamp camera.
Referring to the block interaction diagram shown in FIG. 4b, the hardware abstraction service may include at least a base component and a camera component. In the device service initialization phase, the device interconnection service first loads the base component and initializes it. After the base component is initialized, it can interact with the device adaptation module of the desk lamp to obtain device information and virtualization capability information. Exemplary device information includes, but is not limited to, device name, device identification, device type, and the like. Exemplary virtualization capability information includes, but is not limited to, whether a virtualized camera is supported, whether a virtualized microphone is supported, and the like.
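Illustratively, the device information and virtualization capability information obtained in this step may be organized as in the following sketch (written in Java for illustration; the class and field names are assumptions of this description, not limitations of the embodiment):

```java
// Illustrative data holders for the information the base component obtains
// from the device adaptation module; all field names are assumptions only.
class DeviceInfo {
    String deviceName;  // e.g. "DeskLamp-01"
    String deviceId;    // unique device identification
    String deviceType;  // e.g. "desk_lamp"
}

class VirtualizationCapability {
    boolean supportsVirtualCamera;      // whether the camera can be virtualized
    boolean supportsVirtualMicrophone;  // whether the microphone can be virtualized
}
```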
That the desk lamp supports a virtualized camera can be understood as meaning that the camera of the desk lamp allows other electronic devices (such as the tablet) to invoke it; that is, the camera of the desk lamp can serve as a virtual camera of the other electronic devices.
After the base component obtains the device information and capability information of the desk lamp, if the desk lamp supports a virtualized camera, the base component loads the camera component to provide the virtualized camera capability. At this time, the base component may prepare to establish a negotiation channel, so as to negotiate network connection related information (including but not limited to IP address and port, etc.) with the tablet. When preparing to establish the negotiation channel, the base component creates a session service (Session Server) and sends the session name (Session Name) of the session service to the device interconnection service, so that a negotiation channel can be established between the transmission management service on the tablet side and the device interconnection service on the desk lamp side.
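A minimal, self-contained sketch of this preparation step is given below, modeling the session server as a listening socket; the actual embodiment uses the device interconnection service rather than raw sockets, and all names here are assumptions:

```java
import java.io.IOException;
import java.net.ServerSocket;

// Sketch only: the "session server" is modeled as a listening socket, and
// the session name is the identifier the peer uses to find this server.
class NegotiationPrep {
    private final ServerSocket server;
    private final String sessionName;

    NegotiationPrep(String sessionName) throws IOException {
        this.sessionName = sessionName;
        this.server = new ServerSocket(0); // ephemeral port chosen by the OS
    }

    String getSessionName() { return sessionName; }
    int getPort() { return server.getLocalPort(); }

    public static void main(String[] args) throws IOException {
        NegotiationPrep prep = new NegotiationPrep("desklamp-camera-negotiation");
        // The session name (plus the IP/port negotiated later) is what the
        // tablet side uses to establish the negotiation channel.
        System.out.println(prep.getSessionName() + " @ port " + prep.getPort());
    }
}
```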
1. Device discovery phase
S1.1, in response to a user operation, the education APP of the tablet transmits a device discovery instruction to the device management service.
The user operation may be an operation that the user clicks a function option in the education APP that needs to call the virtual camera. For example, the user may click on the operations of the click-to-read function, the word search function, the job function, and the photographing function in the education APP.
The education APP of the tablet receives the user operation, and in response to the operation, transmits a device discovery instruction to the device management service of the tablet. The device discovery instruction is used for indicating to search the IOT device capable of establishing connection with the tablet. By way of example, the device discovery instructions may include, but are not limited to, an instruction type and a device type to be discovered. In this embodiment, the device discovery instruction is specifically configured to find a desk lamp that can establish a connection with the tablet.
S1.2, the device management service in the tablet invokes an authentication service to authenticate the education APP, and an authentication result of the education APP is obtained.
After receiving the device discovery instruction, the device management service can acquire the name (or identification) of the education APP based on the existing mechanism of the android system, and perform APP authentication on the education APP according to the name of the education APP. The device management service can invoke the authentication service to authenticate the education APP so as to obtain an authentication result of the education APP.
In this embodiment, the tablet side application framework layer is further provided with a device management API corresponding to the device management service, and a hardware virtualization API corresponding to the hardware virtualization service. In order to implement the technical solution provided in this embodiment, the education APP needs to register on a related platform (for example, a platform provided by a tablet manufacturer), adapt a framework of a device management service, a hardware virtualization service, and a transmission management service, and apply for rights of a device management API and a hardware virtualization API.
Illustratively, the authentication service accesses an authentication server to authenticate the educational APP through the authentication server, including, but not limited to, whether the authentication is registered on the relevant platform, whether the relevant framework is adapted, and whether the relevant API rights are applied.
Also exemplary, the authentication service may authenticate the educational APP according to a local whitelist.
After the authentication service obtains the authentication result (authentication success or authentication failure) of the education APP, the authentication service sends the authentication result to the device management service.
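The local-whitelist path mentioned above may be sketched as follows; the whitelist entries and method names are illustrative assumptions, and a real implementation would additionally check platform registration and the applied-for API permissions:

```java
import java.util.Set;

// Sketch of whitelist-based APP authentication (the local path above).
class AppAuthenticator {
    private final Set<String> whitelist;

    AppAuthenticator(Set<String> whitelist) {
        this.whitelist = whitelist;
    }

    /** Returns true for authentication success, false for authentication failure. */
    boolean authenticate(String appName) {
        return whitelist.contains(appName);
    }

    public static void main(String[] args) {
        AppAuthenticator auth = new AppAuthenticator(Set.of("education.app.example"));
        System.out.println(auth.authenticate("education.app.example")); // true
        System.out.println(auth.authenticate("unknown.app"));           // false
    }
}
```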
S1.3, the device management service in the tablet transmits a device search instruction to the transmission management service when the education APP authentication is successful.
If the education APP authentication is successful, the device management service sends a device search instruction to the transmission management service. The device search instruction may include, but is not limited to, an instruction type, a device type to be searched, and a search mode. Exemplary search means include, but are not limited to, near field device scanning and obtaining device information from a cloud server. In this embodiment, the device to be searched is a desk lamp.
S1.4, the transmission management service in the tablet acquires a far-near field device list according to the device search instruction, and sends the far-near field device list to the device management service.
The far-near field device list includes a far-field device list and a near-field device list. The far-field devices included in the far-field device list refer to registered devices acquired from the cloud server, and the near-field devices included in the near-field device list refer to devices scanned through near field communication. In the far-field device list and the near-field device list, the device information includes, but is not limited to, a device name, a device identification, a device type, and the like.
When the transmission management service receives the device search instruction, it performs the related device search operations (such as the near-field device scanning operation and acquiring related device information from the cloud server) according to the device type to be searched and the search mode carried in the device search instruction, obtains the far-field device list and the near-field device list, and sends them to the device management service.
S1.5, the device management service in the tablet performs device filtering according to the far-near field device list, and reports the filtered device information to the education APP.
The device management service performs device filtering according to the far-field device list and the near-field device list, determines the desk lamps that can be linked with the tablet, and sends the desk lamp information to the education APP. The device management service may perform an intersection operation on the far-field device list and the near-field device list, filtering out desk lamps that exist only in the far-field device list or only in the near-field device list, and taking desk lamps that exist in both lists as the desk lamps that can be linked with the tablet. In this way, the device management service filters out desk lamps that are not registered in the cloud server, as well as desk lamps that cannot perform near field communication with the tablet.
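The intersection operation described here may be sketched as follows, matching devices by their device identifier (the exact matching key is an assumption):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of device filtering: keep only desk lamps present in BOTH the
// far-field (cloud-registered) list and the near-field (scanned) list.
class DeviceFilter {
    static List<String> filterLinkableLamps(List<String> farFieldIds,
                                            List<String> nearFieldIds) {
        Set<String> nearField = new HashSet<>(nearFieldIds);
        List<String> linkable = new ArrayList<>();
        for (String id : farFieldIds) {
            if (nearField.contains(id)) {
                linkable.add(id); // present in both lists -> can be linked
            }
        }
        return linkable;
    }

    public static void main(String[] args) {
        // Only "lamp-B" is both registered in the cloud and reachable nearby.
        System.out.println(filterLinkableLamps(
                List.of("lamp-A", "lamp-B"),
                List.of("lamp-B", "lamp-C"))); // prints [lamp-B]
    }
}
```

The same intersection logic applies to the communication device list and the registered device list in the alternative embodiment below.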
In another alternative embodiment, if the tablet and the desk lamp are under the same LAN, the transmission management service of the tablet may obtain a communication device list and a registered device list according to the device search instruction. The devices included in the communication device list refer to devices scanned through near field communication or far field communication, and the devices included in the registered device list refer to registered devices acquired from the cloud server. In the communication device list and the registered device list, the device information includes, but is not limited to, a device name, a device identification, a device type, and the like.
The transmission management service in the tablet sends the communication device list and the registered device list to the device management service, and the device management service performs device filtering according to the two lists and reports the filtered device information to the education APP. The device management service may perform an intersection operation on the communication device list and the registered device list, filtering out desk lamps that exist only in the communication device list or only in the registered device list, and taking desk lamps that exist in both lists as the desk lamps that can be linked with the tablet. In this way, the device management service filters out desk lamps that are not registered in the cloud server, as well as desk lamps that cannot communicate with the tablet.
2. Virtual camera enabled phase
S2.1, the education APP of the tablet determines a desk lamp to be linked.
The number of desk lamps that can be linked with the tablet, as filtered by the device management service, may be one or more. When the number is one, the education APP takes it as the desk lamp to be linked by default; when the number is more than one, the education APP may display a list of desk lamps to be linked to the user, so that the user can select a desk lamp in the list, and the education APP determines the desk lamp to be linked in response to the user's selection operation.
It should be noted that, the step of determining the desk lamp to be linked by the education APP may also be divided into a device discovery stage, which is not limited in this embodiment.
S2.2, the education APP of the tablet performs device verification and device connection on the desk lamp, and learns that the desk lamp supports a virtualized camera.
S2.3, the education APP of the tablet sends a virtual camera enabling request to the hardware virtualization service.
After learning that the desk lamp supports a virtualized camera, the education APP of the tablet sends a virtual camera enable request to the hardware virtualization service. The virtual camera enable request is used to indicate registering the virtual camera in the virtual camera HAL. The virtual camera enable request may include, but is not limited to, a request type, a device name, a device identification, a device type, and an identification of the virtual camera.
S2.4, the hardware virtualization service of the tablet registers the virtual camera with the virtual camera HAL.
After receiving the virtual camera enabling request, the hardware virtualization service registers the corresponding virtual camera with the virtual camera HAL according to the virtual camera enabling request.
S2.5, the tablet' S virtual camera HAL sends a virtual camera enable success indication to the educational APP after the virtual camera registration is completed.
The flow of the virtual camera enabled phase is explained in detail below with reference to the schematic interaction diagram of the modules shown in fig. 4 b. Referring to fig. 4b, the flow of the virtual camera enable phase mainly includes a device check sub-phase (S301-S309), a device connection sub-phase (S310-S321), a device service capability request sub-phase (S322-S325), and a virtual camera enable sub-phase (S326-S331).
Referring to fig. 4b, the process of the virtual camera enabling phase specifically includes the following steps:
S301, the education APP in the tablet sends a virtual camera enabling instruction to the hardware virtualization API.
The virtual camera enabling instruction is used for indicating to enable the virtual camera, and the virtual camera enabling instruction can include but is not limited to an instruction type, a device name, a device identifier and a device type.
S302, after receiving the virtual camera enabling instruction, the hardware virtualization API in the tablet sends a device checking instruction to an interface scheduling module of the hardware virtualization service.
The device check instruction is used to indicate verifying the device information carried in the virtual camera enabling instruction. The device check instruction may include, but is not limited to, an instruction type, a device name, a device identifier, and a device type.
S303, an interface scheduling module of the hardware virtualization service in the tablet sends an APP authentication instruction to a right management module of the hardware virtualization service.
After receiving the device checking instruction, the interface scheduling module of the hardware virtualization service firstly sends an APP authentication instruction to the authority management module of the hardware virtualization service so as to authenticate the APP initiating the virtual camera enabling instruction. The APP authentication instruction may include, but is not limited to, a name of the APP.
S304, performing APP authentication on the education APP by the permission management module of the hardware virtualization service in the tablet.
Illustratively, the rights management module may access an authentication server to authenticate the educational APP through the authentication server, including, but not limited to, whether the authentication is registered on the relevant platform, whether the relevant framework is adapted, and whether the relevant API rights are applied. The rights management module may access the authentication server through the authentication service, which is not limited in this embodiment.
S305, when the authority management module of the hardware virtualization service in the tablet is successful in authenticating the education APP, an authentication success indication is sent to the interface scheduling module.
After the authority management module obtains the authentication result of the education APP: if the education APP is successfully authenticated, it sends an authentication success indication to the interface scheduling module; if the authentication of the education APP fails, it sends an authentication failure indication to the hardware virtualization API, so that the hardware virtualization API returns indication information of insufficient APP permission to the education APP according to the authentication failure indication.
S306, when the interface scheduling module of the hardware virtualization service in the tablet determines that the education APP authentication is successful, a device verification instruction is sent to the device management module.
After receiving the authentication success indication and determining that the education APP is successfully authenticated, the interface scheduling module of the hardware virtualization service sends a device verification instruction to the device management module. The device verification instruction is used for performing state verification on the device to be linked; in this embodiment, it is specifically used for performing state verification on the desk lamp to be linked. Exemplary device verification instructions may include, but are not limited to, an instruction type, a device name, a device identification, and a device type.
S307, the device management module of the hardware virtualization service in the tablet sends a device information inquiry instruction to the device profile module of the device management service.
Wherein, the equipment profile module of the equipment management service stores the equipment information of the current online.
After receiving the device verification instruction, the device management module of the hardware virtualization service sends a device information inquiry instruction to the device profile module of the device management service, wherein the device information inquiry instruction can include, but is not limited to, a device name, a device identifier and a device type.
S308, the device profile module of the device management service in the tablet returns device information to the device management module of the hardware virtualization service.
If the corresponding device is found according to the device information query instruction, the device profile module of the device management service returns the device information to the device management module of the hardware virtualization service. The returned device information may include, but is not limited to, a device name, a device identification, a device type, and an online status.
If the corresponding device is not found according to the device information query instruction, the device profile module of the device management service returns a null value to the device management module of the hardware virtualization service, to indicate that the corresponding device was not found. At this time, the device management module of the hardware virtualization service may send a device verification failure indication to the hardware virtualization API, so that the hardware virtualization API returns indication information of the device verification failure to the education APP according to the device verification failure indication.
S309, after receiving the device information, the device management module of the hardware virtualization service in the tablet sends a device verification success indication to the hardware virtualization API.
After the device management module of the hardware virtualization service receives the device information returned by the device profile module of the device management service, the device management module sends a device verification success indication to the hardware virtualization API, to indicate that verification of the desk lamp to be linked is successful.
S310, the hardware virtualization API in the tablet sends a device connection request to a device management module of the hardware virtualization service.
After confirming that the verification of the desk lamp to be linked is successful, the hardware virtualization API sends a device connection request to a device management module of the hardware virtualization service. The device connection request is used for indicating that network connection is established with the device to be linked, and in this embodiment, the device connection request is specifically used for indicating that network connection is established with the desk lamp to be linked. The device connection request may include, but is not limited to, a request type, a device name, a device identification, and a device type.
S311, after receiving the device connection request, the device management module of the hardware virtualization service executes a negotiation channel setup preparation operation and sends a negotiation channel opening request to the transmission management service.
After receiving the device connection request, the device management module of the hardware virtualization service prepares the negotiation channel. When preparing the negotiation channel, the device management module creates a session service (Session Server) and sends the Session Name of the session service to the transmission management service. After the negotiation channel is ready, a negotiation channel opening request is sent to the transmission management service. The negotiation channel opening request is used to indicate establishing a negotiation channel, and may include, but is not limited to, a peer device identifier (i.e., the desk lamp identifier) and the Session Name.
In this embodiment, the negotiation channel opening request is actively initiated by the tablet side, that is, the tablet needs to establish a connection with the desk lamp. At this time, the desk lamp may be understood as a server, and the tablet may be understood as a client that needs to access the server.
S312, the transmission management service in the tablet establishes a negotiation channel with the device interconnection service in the desk lamp.
After receiving the negotiation channel opening request, the transmission management service interacts with the device interconnection service in the desk lamp according to the Session Name to establish the negotiation channel. Specifically, establishing the negotiation channel may be establishing a session and determining a session identifier.
S313, the device interconnection service in the desk lamp sends a negotiation channel successful establishment instruction to the camera component in the hardware abstraction service.
After the establishment of the negotiation channel is completed, the device interconnection service in the desk lamp sends an indication of successful establishment of the negotiation channel to the camera component in the hardware abstraction service so as to indicate that the establishment of the negotiation channel is completed and that the devices needing to establish connection currently exist. The negotiation channel successful establishment indication may include, but is not limited to, device information (i.e., tablet device information) that needs to establish a connection, and a session identifier.
S314, the transmission management service in the tablet sends a negotiation channel successful establishment instruction to the device management module of the hardware virtualization service.
After the negotiation channel is established, the transmission management service in the tablet sends a negotiation channel successful establishment instruction to the device management module of the hardware virtualization service so as to indicate that the establishment of the negotiation channel is completed and that the device needing to establish the connection exists currently. The negotiation channel successful establishment indication may include, but is not limited to, device information (i.e., desk lamp device information) that needs to establish a connection, and a session identifier.
The present embodiment does not limit the execution order of S313 and S314.
S315, the device management module of the hardware virtualization service in the tablet sends a device negotiation request to the camera component of the hardware abstraction service in the desk lamp based on the negotiation channel.
The device negotiation request may include, but is not limited to, device information (such as a device name, a device identifier, a device type, etc.) and a control channel connection request.
S316, after the camera component of the hardware abstraction service in the desk lamp receives the device negotiation request, a control channel is prepared, and device negotiation information is returned to the device management module of the hardware virtualization service in the tablet.
After receiving the device negotiation request, the camera component of the hardware abstraction service in the desk lamp parses the device negotiation request to acquire the device information of the peer, determines an IP address and a port to be monitored according to the control channel connection request, adds the IP address and the port to the device negotiation information, and returns the device negotiation information to the device management module of the hardware virtualization service in the tablet.
It is noted that the device negotiation request and the device negotiation information are transmitted based on the established negotiation channel.
S317, after the device management module of the hardware virtualization service in the tablet receives the returned device negotiation information, the negotiation channel is closed.
Wherein closing the negotiation channel may specifically be closing the session. After the device management module of the hardware virtualization service in the tablet receives the returned device negotiation information, the session ends, and the device management module of the hardware virtualization service can close the corresponding session according to the session identifier.
S318, the device management module of the hardware virtualization service in the tablet sends a control channel opening request to the transmission management service.
The control channel opening request is used to indicate establishing a network communication connection with the desk lamp. The control channel opening request may include, but is not limited to, a communication protocol, a source IP, a source port, a destination IP, and a destination port, where the destination IP and the destination port are the IP and port monitored by the camera component of the hardware abstraction service in the desk lamp.
S319, the transmission management service in the tablet establishes a control channel connection with the camera component of the hardware abstraction service in the desk lamp, and sends a control channel successful connection indication to the device management module of the hardware virtualization service in the tablet.
The transmission management service in the tablet receives the control channel opening request, and establishes control channel connection with the table lamp according to the information carried by the control channel opening request, namely, establishes network communication connection between the tablet and the table lamp. Furthermore, the device management module of the hardware virtualization service in the tablet and the camera component of the hardware abstraction service in the desk lamp can perform network communication based on the control channel.
After the control channel is successfully established, the transmission management service in the tablet sends a control channel successful connection indication to the device management module of the hardware virtualization service in the tablet. The indication of successful connection of the control channel may include, but is not limited to, a connection success identifier and information related to the control channel.
S321, the device management module of the hardware virtualization service in the tablet sends a device connection success indication to the hardware virtualization API.
The device connection success indication may include, but is not limited to, a connection success identifier and connected device information.
S322, the hardware virtualization API in the tablet sends a device capability request to a device management module of the hardware virtualization service.
After receiving the device connection success indication, the hardware virtualization API sends a device capability request to a device management module of the hardware virtualization service. The device capability request may be used to request to obtain virtualization capability information of the peer device (i.e., the desk lamp). Exemplary virtualized device capability information includes, but is not limited to, whether a virtualized camera is supported, whether a virtualized microphone is supported, and the like.
S323, the device management module of the hardware virtualization service in the tablet sends a device capability request to the camera component of the hardware abstraction service in the desk lamp in the control channel.
S324, the camera component of the hardware abstraction service in the desk lamp returns the device capability information to the device management module of the hardware virtualization service in the tablet in the control channel.
In this embodiment, the returned device capability information of the desk lamp may include at least the capability of supporting the virtualized camera and the camera identifier of the desk lamp.
S325, the device management module of the hardware virtualization service in the tablet sends device capability information to the hardware virtualization API.
The device management module of the hardware virtualization service in the tablet sends the received device capability information to the hardware virtualization API so that the hardware virtualization API can know whether the desk lamp has the capability of supporting the virtualized camera.
S326, the hardware virtualization API in the tablet sends a virtual camera enabling request to the device management module of the hardware virtualization service.
The hardware virtualization API in the tablet knows that the desk lamp has the capability of supporting the virtualized camera and sends a virtual camera enabling request to the device management module of the hardware virtualization service. The virtual camera enabling request may include, but is not limited to, a request type and a camera identifier of a desk lamp.
S327, the device management module of the hardware virtualization service in the tablet registers the virtual camera in the virtual camera HAL.
After receiving the virtual camera enabling request, the device management module of the hardware virtualization service sends a virtual camera registration request to the virtual camera HAL. The virtual camera registration request may include, but is not limited to, a request type and a camera identification of the desk lamp. After receiving the virtual camera registration request, the virtual camera HAL registers a virtual camera driver for the camera of the desk lamp in the virtual camera HAL, assigns a camera ID (i.e. virtual camera ID) for the camera of the desk lamp, and registers the camera ID in the system. Thus, the mapping relation between the desk lamp camera and the virtual camera is established in the virtual camera HAL.
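The mapping established in this step may be sketched as follows; the ID range and method names are illustrative assumptions, and real HAL registration additionally registers a camera driver with the system:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the virtual-camera-ID to physical-camera mapping kept in the HAL.
class VirtualCameraRegistry {
    private final Map<Integer, String> virtualToPhysical = new HashMap<>();
    private final AtomicInteger nextId = new AtomicInteger(100); // assumed ID range

    /** Registers a physical camera and returns the assigned virtual camera ID. */
    synchronized int register(String physicalCameraId) {
        int virtualId = nextId.getAndIncrement();
        virtualToPhysical.put(virtualId, physicalCameraId);
        return virtualId;
    }

    /** Resolves a virtual camera ID back to the linked physical camera. */
    synchronized String resolve(int virtualCameraId) {
        return virtualToPhysical.get(virtualCameraId);
    }

    public static void main(String[] args) {
        VirtualCameraRegistry hal = new VirtualCameraRegistry();
        int id = hal.register("desklamp-cam-0");
        System.out.println(id + " -> " + hal.resolve(id)); // 100 -> desklamp-cam-0
    }
}
```

The same mapping is consulted in the reverse direction in step S3.3 below, when the virtual camera HAL resolves the virtual camera ID in an image preview request to the linked desk lamp camera.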
S328, the device management module of the hardware virtualization service in the tablet sends a service state update indication to the camera component of the hardware abstraction service in the desk lamp.
The service state update instruction is used for instructing a camera component of the hardware abstraction service in the desk lamp to update the virtualized service state of the camera component. The virtualized service state may include an occupied state, an unoccupied state, or may include a registered state, an unregistered state, among others. Exemplary service status update indications may include, but are not limited to, device information of a peer device (i.e., a desk lamp), a hardware identifier (e.g., a desk lamp camera identifier), and a virtualized service status corresponding to the hardware identifier.
S329, the camera component of the hardware abstraction service in the desk lamp updates the service state according to the service state update instruction.
When the service state update indication indicates that the virtualized service state corresponding to the desk lamp camera is the occupied state (or registered state), the camera component updates the virtualized service state corresponding to the desk lamp camera to the occupied state (or registered state).
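The state kept by the camera component may be sketched as follows; the payload fields are assumptions:

```java
// Sketch of the virtualized service state maintained by the desk lamp's
// camera component and updated on a service state update indication.
class CameraComponentState {
    enum VirtualizedServiceState { UNOCCUPIED, OCCUPIED } // i.e. unregistered/registered

    private VirtualizedServiceState state = VirtualizedServiceState.UNOCCUPIED;

    /** Applies a service state update indication for the given hardware identifier. */
    void onServiceStateUpdate(String hardwareId, VirtualizedServiceState newState) {
        // A fuller sketch would use hardwareId to select among several devices.
        this.state = newState;
    }

    VirtualizedServiceState current() { return state; }

    public static void main(String[] args) {
        CameraComponentState cam = new CameraComponentState();
        cam.onServiceStateUpdate("desklamp-cam-0", VirtualizedServiceState.OCCUPIED);
        System.out.println(cam.current()); // OCCUPIED
    }
}
```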
S330, the device management module of the hardware virtualization service in the tablet sends a virtual camera enabling success indication to the hardware virtualization API.
The virtual camera enabling success indication may include, but is not limited to, an enabling success identifier (or called a virtualization success identifier), a camera identifier of the desk lamp, and a camera ID corresponding to the virtual camera (or called a camera ID corresponding to the desk lamp camera).
The present embodiment does not limit the execution order of S328 and S330.
S331, the hardware virtualization API in the tablet sends a virtual camera enabling success indication to the education APP.
3. Virtual camera preview access phase
S3.1, the hardware virtualization API in the tablet sends a virtual camera access instruction to the camera service.
Virtual camera access instructions refer to instructions for invoking a virtual camera. The virtual camera access instruction may include, but is not limited to, an instruction type, a virtual camera ID, and a camera configuration parameter, where the configuration parameter includes, but is not limited to, a camera resolution and an acquisition frame rate.
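Illustratively, the camera configuration parameters carried here may be held as in the following sketch; the 1080p/30 fps values are assumptions (1080p is suggested by the word-search scenario later in this document):

```java
// Illustrative holder for camera configuration parameters in a virtual
// camera access instruction; field names and defaults are assumptions.
class CameraConfig {
    final int width;      // image width in pixels
    final int height;     // image height in pixels
    final int frameRate;  // acquisition frame rate, frames per second

    CameraConfig(int width, int height, int frameRate) {
        this.width = width;
        this.height = height;
        this.frameRate = frameRate;
    }

    public static void main(String[] args) {
        CameraConfig cfg = new CameraConfig(1920, 1080, 30);
        System.out.println(cfg.width + "x" + cfg.height + " @ " + cfg.frameRate + " fps");
    }
}
```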
S3.2, the camera service in the tablet sends an image preview request to the virtual camera HAL according to the virtual camera access instruction.
After receiving the virtual camera access instruction, the camera service generates a corresponding image preview request according to the virtual camera ID and sends the corresponding image preview request to the virtual camera HAL. Wherein the image preview request is for requesting a preview image data stream. Illustratively, the image preview request may include, but is not limited to, a request identification, a virtual camera ID, camera configuration parameters, and the like.
S3.3, the virtual camera HAL in the tablet sends an image preview request to the hardware virtualization service.
After receiving the image preview request, the virtual camera HAL determines the matched virtualized hardware identifier according to the virtual camera ID carried in the image preview request. In this embodiment, the virtual camera HAL determines the linked desk lamp camera according to the virtual camera ID and the mapping relationship between the virtual camera ID and the desk lamp camera, generates a corresponding image preview request according to the determined virtualized hardware identifier, and sends the image preview request to the hardware virtualization service. By way of example, the image preview request may include, but is not limited to, a request identifier, device information (i.e., desk lamp information), a virtualized hardware identifier (i.e., desk lamp camera identifier), camera configuration parameters, and the like.
S3.4, the hardware virtualization service in the tablet sends an image preview request to the transmission management service.
The hardware virtualization service sends an image preview request to the transport management service. The image preview request may include, but is not limited to, a request identifier, device information (i.e., table lamp information), a virtualized hardware identifier (i.e., table lamp camera identifier), and a camera configuration parameter.
When the hardware virtualization service in the tablet sends the image preview request to the transmission management service, if the hardware virtualization service finds that no data channel has been established with the desk lamp, the hardware virtualization service generates a data channel establishment request and sends it to the transmission management service. The data channel establishment request is used to indicate transmitting data with the desk lamp, and may include, but is not limited to, a session identifier, connection information, a data codec mode, and the like.
The transmission management service in the tablet receives the data channel establishment request, and establishes a data channel connection with the desk lamp according to the information carried in the request, that is, establishes a data channel between the tablet and the desk lamp. Further, the transmission management service in the tablet and the device interconnection service in the desk lamp may transmit various data, including but not limited to image data, based on the data channel.
After the data channel is successfully established, the transmission management service in the tablet sends a data channel successful connection indication to the hardware virtualization service in the tablet, and the device interconnection service in the desk lamp sends a data channel successful connection indication to the camera component in the hardware abstraction service. The indication of successful connection of the data channel may include, but is not limited to, a connection success identifier and information related to the data channel.
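The check-then-create behavior described above may be sketched as follows; the channel handle is modeled as a plain object, and session/codec details are omitted:

```java
// Sketch of lazy data-channel establishment: reuse an existing channel,
// create one only on first use.
class DataChannelManager {
    private Object dataChannel; // placeholder for the real channel handle

    /** Returns the existing channel, establishing it on first use. */
    synchronized Object getOrEstablish() {
        if (dataChannel == null) {
            // Corresponds to sending the data channel establishment request
            // (session identifier, connection information, codec mode) and
            // waiting for the successful-connection indication.
            dataChannel = establishChannel();
        }
        return dataChannel;
    }

    private Object establishChannel() {
        return new Object(); // stand-in for the negotiated channel
    }

    public static void main(String[] args) {
        DataChannelManager mgr = new DataChannelManager();
        System.out.println(mgr.getOrEstablish() == mgr.getOrEstablish()); // true: reused
    }
}
```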
S3.5, the transmission management service in the tablet transmits the image preview request to the device interconnection service of the desk lamp.
The transmission management service in the tablet determines the corresponding control channel according to the device information carried in the image preview request, and transmits the image preview request to the device interconnection service of the desk lamp over the control channel.
S3.6, the device interconnection service in the desk lamp sends an image preview request to the camera driver.
After receiving the image preview request, the device interconnection service in the desk lamp determines a hardware driver (in this embodiment, determines a camera driver) according to the virtualized hardware identifier, and sends the corresponding image preview request to the camera driver.
S3.7, the camera driver in the desk lamp drives the camera to collect images, and the preview image data is transmitted to the hardware virtualization service of the tablet through the data channel.
The camera driver starts the camera and drives it to collect images according to the camera configuration parameters carried in the image preview request, obtaining a preview image data stream. The preview image data stream is sent by the hardware abstraction service to the device interconnection service, so that the device interconnection service continuously transmits the preview image data stream to the hardware virtualization service of the tablet over the data channel. The packetization, encoding, and decoding of the preview image data stream are not described herein.
S3.8, the hardware virtualization service in the tablet sends the preview image data to the virtual camera HAL.
The hardware virtualization service continues to receive the preview image data stream and sends the preview image data stream to the virtual camera HAL.
S3.9, the virtual camera HAL in the tablet sends the preview image data to the camera service.
At this time, the virtual camera HAL continuously acquires preview image data acquired by the desk lamp camera, and continuously transmits the preview image data to the camera service.
S3.10, the camera service in the tablet sends the preview image data to the education APP.
S3.11, the education APP in the tablet displays the preview image.
After the education APP receives the preview image data stream through the camera service, the preview image can be displayed in the corresponding interface.
4. Virtual camera photographing stage
S4.1, in response to the received user operation, the education APP in the tablet sends a photographing request to the hardware virtualization service.
The user operation may be, for example, an operation of clicking a photographing option. In response to the received user operation, the education APP in the tablet sends a photographing request to the hardware virtualization service. The photographing request may include, but is not limited to, a photographing image sequence number, device information (i.e., table lamp information), a virtualized hardware identifier (i.e., table lamp camera identifier), a camera configuration parameter, and the like. Camera configuration parameters include, but are not limited to, image resolution.
The photographing request can also carry a task identifier so as to ensure orderly management of multiple photographing tasks.
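Illustratively, a photographing request carrying a task identifier may be sketched as follows; the field names and the sequence-number scheme are assumptions:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a photographing request with a task identifier, so that multiple
// in-flight photographing tasks can be matched to their returned images.
class PhotoRequest {
    private static final AtomicLong SEQ = new AtomicLong(1);

    final long taskId;       // unique per photographing task
    final String deviceId;   // desk lamp identifier
    final String cameraId;   // desk lamp camera identifier
    final int imageWidth;    // requested image resolution
    final int imageHeight;

    PhotoRequest(String deviceId, String cameraId, int width, int height) {
        this.taskId = SEQ.getAndIncrement();
        this.deviceId = deviceId;
        this.cameraId = cameraId;
        this.imageWidth = width;
        this.imageHeight = height;
    }

    public static void main(String[] args) {
        PhotoRequest r1 = new PhotoRequest("lamp-01", "cam-0", 1920, 1080);
        PhotoRequest r2 = new PhotoRequest("lamp-01", "cam-0", 1920, 1080);
        System.out.println(r1.taskId + ", " + r2.taskId); // 1, 2 -- ordered tasks
    }
}
```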
S4.2, the hardware virtualization service in the tablet sends a photographing request to the transmission management service.
S4.3, the transmission management service in the tablet transmits a photographing request to the device interconnection service of the desk lamp.
The transmission management service in the tablet determines the corresponding control channel according to the device information carried in the photographing request, and transmits the photographing request to the device interconnection service of the desk lamp over the control channel.
S4.4, the equipment interconnection service in the desk lamp sends a photographing request to the camera driver.
After receiving the photographing request, the device interconnection service in the desk lamp determines the hardware driver for response (in this embodiment, the camera driver) according to the virtualized hardware identifier, and sends a corresponding photographing request to the camera driver.
S4.5, the camera driver in the desk lamp drives the camera to capture an image, and the captured image data is transmitted to the hardware virtualization service of the tablet through the data channel.
The camera driver drives the camera to acquire an image according to the camera configuration parameters carried in the photographing request, obtaining captured image data. The captured image data is sent through the hardware abstraction service to the device interconnection service, so that the device interconnection service transmits the captured image data to the hardware virtualization service of the tablet over the data channel. The packetization, encoding, and decoding of the captured image data are not described herein.
S4.6, the hardware virtualization service in the tablet sends the captured image data to the education APP.
S4.7, the education APP in the tablet displays the captured image.
After the education APP receives the captured image through the hardware virtualization service, it can display the captured image in the corresponding interface.
In this embodiment, the virtual camera preview access phase is implemented based on the Android native camera framework, and the virtual camera photographing phase is implemented based on a private virtualized camera framework, so that the processing path involved in the virtual camera photographing phase is shorter and the photographing delay is smaller. Meanwhile, because the image preview is implemented based on the Android native camera framework, the education APP requires fewer changes to adapt to the technical solution provided in this embodiment.
It should be noted that the above-mentioned phase division in the flow is merely an exemplary expression, and the embodiment of the present application is not limited thereto. In addition, after the preview image is displayed on the tablet in the virtual camera preview access phase, the real-time preview display process and the virtual camera photographing process can be performed simultaneously. Where the above processes are not explained in detail, reference may be made to the prior art, and details are not repeated herein.
Fig. 4a shows a communication architecture of the collaborative system, which is used to complete the management of the tablet to the virtual camera (i.e. the desk lamp camera), the control command interaction between the tablet and the desk lamp, and the return and processing of the image data.
It should be noted that, instructions, requests, etc. transmitted across devices (i.e. transmitted between the tablet and the desk lamp) need to be encapsulated based on a communication protocol and a parameter sequence, etc., which will not be described in detail in this embodiment. The hardware virtualization service in the tablet may also manage the life cycle of previewing image streams and capturing images by dynamically allocating memory and dynamically destroying memory.
In addition, it should be noted that, before executing the cooperative working method provided in this embodiment, the education APP needs to bind with the desk lamp and register the desk lamp in the cloud server.
The embodiment of the application provides a framework scheme in which an Android system device uses the camera of an external device to take photos. This scheme can be applied not only in educational scenarios, but is also applicable to other devices provided with a camera; these devices can share their camera capability with Android system devices such as mobile phones and tablets, thereby realizing interconnection and intercommunication between the Android system devices and these devices.
Fig. 5a-5b illustrate an application scenario. As shown in (1) in fig. 5a, the tablet displays an interface 401 on which a plurality of application icons are displayed, and the user clicks the educational application icon 4011. In response to the received user operation, the tablet opens the educational application and displays an educational application interface, which may be as shown in (2) in fig. 5a. As shown in (2) in fig. 5a, the tablet displays the educational application interface 402, on which various function options of the educational application are displayed, including but not limited to a word search function, a click-to-read function, a job function, a photographing function, etc. When the user wants to use the photographing function, the user clicks the photographing function option 4021, and in response to the user operation, the tablet executes the flows of the device discovery phase, the virtual camera enabling phase, and the virtual camera preview access phase.
In the device discovery stage, if the device management service of the tablet filters out exactly one desk lamp that can be linked with the tablet, the tablet automatically executes the flows of the virtual camera service enabling stage and the virtual camera preview access stage, and displays, for example, the interface shown in (1) in fig. 5b. In the device discovery stage, if the device management service of the tablet filters out a plurality of desk lamps that can be linked with the tablet, the tablet displays a desk lamp selection interface. For example, a list of desk lamps to be linked is displayed on the desk lamp selection interface, and the user may perform a selection operation; in response to the user's selection operation, the educational application determines one desk lamp to be linked and continues to execute the flows of the virtual camera service enabling stage and the virtual camera preview access stage, to display, for example, the interface shown in (1) in fig. 5b.
As shown in fig. 5b (1), an image preview window 4031 and a photographing option 4032 are displayed in the interface 403, and a preview image acquired by the desk lamp camera in real time is displayed in the image preview window 4031. At this time, if the user clicks the photographing option 4032, the tablet performs a process of the photographing stage of the virtual camera in response to the user operation, and displays an interface shown in (2) of fig. 5b, for example. As shown in an interface 404 of fig. 5b (2), an image captured by the desk lamp camera is displayed in an image preview window 4041. At this time, when the user clicks the confirm option 4041, the tablet saves the captured image in response to the user operation, and continues to display the preview interface shown in fig. 5b (1), for example. If the user clicks the cancel option 4042, the tablet may display a preview interface, for example, as shown in (1) of fig. 5b, in response to a user operation.
It should be noted that, the interface shown in fig. 5b (2) is merely an exemplary example, and the image captured by the desk lamp camera may be displayed in other areas of the interface instead of the image preview window 4041, and the preview image captured by the desk lamp camera in real time is continuously displayed in the image preview window 4041, which is not limited in this application.
The cooperative working method provided by the embodiment mainly explains a low-cost technical scheme for realizing the online education function based on the combination of the tablet equipment and the desk lamp equipment. The technical solution provided in this embodiment is described below in connection with several different functions related to online education, respectively.
Scene one
Referring to the application scenario schematic diagram shown in fig. 1a, the technical scheme is illustrated by taking the word search function as an example. When a student encounters an unrecognized word, the student can point a finger below the word and capture an image with the desk lamp camera; the tablet recognizes the image to determine the word content, and after completing the online word search, feeds back the meaning of the word to the student through the display screen, for example, displaying and broadcasting the word explanation on the interface.
FIG. 6a is a schematic diagram showing the interaction of the modules. Referring to fig. 6a, the embodiment of the present application provides a method flow for cooperative work of a tablet and a desk lamp, specifically including:
S501, in response to the operation of the user clicking the word search function, the tablet and the desk lamp execute the flows of the device discovery stage, the virtual camera enabling stage, and the virtual camera preview access stage, and the tablet displays a preview interface.
For the flow of the device discovery phase, the virtual camera enabling phase, and the virtual camera preview access phase, reference may be made to the foregoing, and details thereof will not be repeated herein.
It should be noted that in the virtual camera preview access stage, the hardware virtualization API in the tablet carries the camera configuration parameters when sending the virtual camera access instruction to the camera service. The camera configuration parameters may include, but are not limited to, image resolution and image acquisition frame rate. The desk lamp camera configures itself according to the received parameters and acquires preview image data at the corresponding image resolution and frame rate.

In this scenario, the tablet needs to accurately recognize the preview image to determine the text content pointed at by the user, so the image quality requirement on the preview image is high; for example, the image resolution may be set to 1080P. This ensures a higher word-searching success rate.
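Illustratively, the camera configuration parameters can be modeled as a simple structure. The following Python sketch is provided for illustration only; the field names and the frame rate value are assumptions, not part of the claimed scheme.

```python
from dataclasses import dataclass

@dataclass
class CameraConfig:
    """Hypothetical shape of the camera configuration parameters carried
    in a virtual camera access instruction (field names assumed)."""
    image_resolution: str  # e.g. "1080P" in this word-searching scenario
    frame_rate: int        # image acquisition frame rate, frames per second (assumed)

# Scenario one requests a high resolution from the start so that the
# preview stream itself supports accurate recognition.
word_search_config = CameraConfig(image_resolution="1080P", frame_rate=30)
```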
S502, the education APP in the tablet performs finger recognition on the preview image.
The education APP may perform finger recognition on each received preview image, or may periodically perform finger recognition on the most recently received preview image; this embodiment does not limit this.

For example, the education APP may integrate an image recognition algorithm to perform the image recognition operation, or may call an image recognition service to do so; this embodiment does not limit this.
The image recognition algorithm may refer to the prior art, and this embodiment is not described herein.
S503, in response to the user's finger-pointing operation, the education APP in the tablet recognizes the user's finger in the preview image.

When the user points a finger at a word in the book, the desk lamp camera acquires a preview image containing the pointing finger, and the education APP in the tablet can then recognize the user's finger in the preview image.

When the education APP performs finger recognition on the preview image and a finger is recognized, the position information of the finger in the preview image, such as coordinate information, can be obtained.

S504, the education APP in the tablet determines an ROI (region of interest) image according to the position of the finger in the preview image.

After the education APP recognizes the user's finger in the preview image, the ROI image can be determined from the finger's position information. Specifically, the education APP determines the ROI information from the coordinate information of the finger in the preview image, where the ROI information includes, but is not limited to, the center point coordinates and the region range (such as width and height information). The education APP can then crop the ROI image out of the preview image based on the ROI information.
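Illustratively, cropping an ROI image from the ROI information (center point plus region range) may look like the following minimal Python sketch; the array representation of the preview image and the example coordinates are assumptions made only for illustration.

```python
import numpy as np

def crop_roi(preview_image: np.ndarray, center: tuple[int, int],
             size: tuple[int, int]) -> np.ndarray:
    """Crop an ROI image out of a preview image given ROI information:
    center point coordinates plus region range (width, height), clamped
    to the image bounds."""
    cx, cy = center
    w, h = size
    img_h, img_w = preview_image.shape[:2]
    left = max(0, cx - w // 2)
    top = max(0, cy - h // 2)
    right = min(img_w, left + w)
    bottom = min(img_h, top + h)
    return preview_image[top:bottom, left:right]

# Dummy 480P-sized preview frame; the fingertip position is an assumed value.
frame = np.zeros((480, 854, 3), dtype=np.uint8)
roi = crop_roi(frame, center=(427, 300), size=(320, 240))
```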
S505, the education APP in the tablet accurately recognizes the ROI image and determines the word to be interpreted.

The education APP may integrate an image recognition algorithm to accurately recognize the ROI image, or may call an image recognition service to accurately recognize the ROI image and determine the word to be interpreted; this embodiment does not limit this.
The image recognition algorithm may refer to the prior art, and this embodiment is not described herein.
S506, the education APP in the tablet performs a word lookup on the word to be interpreted, and displays the paraphrase of the word to be interpreted.

After determining the word to be interpreted, the education APP can perform an online word lookup, or a lookup in a database, to obtain the word's paraphrase. The education APP then displays the paraphrase for the user to view, and may also read the displayed paraphrase aloud; this embodiment does not limit this.

Similarly, the user may perform the pointing operation with a pointing tool such as a stylus, which this embodiment does not limit. Correspondingly, the education APP recognizes the pointing tool in the preview image to determine whether the user has a word-searching intention, and determines the ROI image according to the position information of the pointing tool in the preview image.

Similarly, the user may also point at a picture in the book with a finger or a pointing tool such as a stylus. Correspondingly, the education APP determines the ROI image according to the position information of the finger or pointing tool, performs picture content recognition on the ROI image, displays the paraphrase corresponding to the picture, and may read the displayed paraphrase aloud. This case is not described in detail in this embodiment.
Fig. 1a and figs. 7a-7b show an exemplary application scenario. As shown in fig. 7a (1), the tablet displays the education APP interface 701, on which various function options of the education application are shown, including but not limited to a word searching function, a click-to-read function, a job function, a photographing function, etc. The user clicks the word searching function option 7011, and in response, the tablet executes the device discovery stage, the virtual camera service enabling stage, and the virtual camera preview access stage.

In the device discovery stage, if the device management service of the tablet filters out exactly one linkable desk lamp, the tablet automatically executes the virtual camera service enabling stage and the virtual camera preview access stage, and displays, for example, the interface shown in (2) in fig. 7a. If the filtering yields multiple linkable desk lamps, the tablet displays a desk lamp selection interface showing a list of candidate desk lamps; in response to the user's selection operation, the educational application determines one desk lamp to link with and continues with the virtual camera service enabling stage and the virtual camera preview access stage, displaying, for example, the interface shown in (2) of fig. 7a.

As shown in fig. 7a (2), an image preview window 7021 and a word-searching operation diagram 7022 are displayed in the interface 702. The preview window 7021 shows the preview image acquired by the desk lamp camera in real time. The user may perform a word- or picture-pointing operation with reference to the operation diagram 7022 to trigger the word searching function, and the education APP performs word or picture recognition on the preview image. With continued reference to fig. 7a (2), when the user points a finger at a word in a book, the desk lamp camera captures a preview image of the pointing finger, which is displayed in the image preview window 7021. The education APP then recognizes the user's finger in the preview image and determines its position information, such as coordinate information. The education APP determines an ROI image according to the finger's position, accurately recognizes the ROI image, and determines the word to be interpreted. After the education APP looks up the word's paraphrase, the corresponding paraphrase is displayed on the interface; see fig. 7b.

However, in the above procedure, to ensure the word-searching success rate, the desk lamp camera must continuously return a high-resolution (for example 1080P) preview image stream, which places a relatively high demand on bandwidth, about 4-8 Mbps. The desk lamp therefore needs a more capable hardware chip, which increases its cost.

To realize the word searching function through the cooperation of the tablet and the desk lamp while reducing the desk lamp's hardware cost, this embodiment also provides another technical scheme. Since finger or pointing-tool recognition merely locates a position in the image and does not place high demands on image resolution, the desk lamp camera continuously returns a low-resolution (for example 480P) preview image stream during the virtual camera preview access stage. If the education APP recognizes the user's pointing operation in the preview image, it triggers the desk lamp to capture a high-resolution (for example 1080P) image, so that the education APP can accurately recognize and determine the word or picture to be interpreted from the high-resolution image. Continuously returning the low-resolution preview stream requires only 0.5-1 Mbps of bandwidth; more bandwidth is needed only when a high-resolution image is transmitted. This technical scheme therefore reduces the desk lamp's requirement on the hardware chip and its cost while still ensuring the word-searching success rate.
Fig. 6b shows an interaction diagram of the modules. Referring to fig. 6b, the embodiment of the present application provides a method flow for cooperative work of a tablet and a desk lamp, specifically including:
S601, in response to the user clicking the word searching function, the tablet and the desk lamp execute the device discovery stage, the virtual camera enabling stage, and the virtual camera preview access stage, and the tablet displays a preview interface.
For the flow of the device discovery phase, the virtual camera enabling phase, and the virtual camera preview access phase, reference may be made to the foregoing, and details thereof will not be repeated herein.
It should be noted that in the virtual camera preview access stage, the hardware virtualization API in the tablet carries the camera configuration parameters when sending the virtual camera access instruction to the camera service. The camera configuration parameters may include, but are not limited to, image resolution and image acquisition frame rate. The desk lamp camera configures itself according to the received parameters and acquires preview image data at the corresponding image resolution and frame rate.

In this scenario, to reduce the bandwidth occupied by the preview image stream, the desk lamp camera may acquire the preview image at a low resolution during the virtual camera preview access stage. For example, the hardware virtualization API in the tablet may send the camera service a virtual camera access instruction carrying a first configuration parameter, where the first configuration parameter includes a first image resolution (e.g., 480P). The desk lamp camera configures itself according to the received parameter and acquires preview image data at the first image resolution and the corresponding image acquisition frame rate.
S602, the education APP in the tablet performs finger recognition on the preview image.
S603, in response to the user's finger-pointing operation, the education APP in the tablet recognizes the user's finger in the preview image.

S604, the education APP in the tablet determines the ROI information according to the position of the finger in the preview image, and generates a photographing request according to the ROI information.
The ROI information refers to information for determining the ROI, and may include, but is not limited to, center point coordinates and region ranges (e.g., width-height information).
S605, the tablet transmits a photographing request to the desk lamp side.
The photographing request may include, but is not limited to, the virtual camera ID corresponding to the desk lamp camera, a second configuration parameter for the desk lamp camera, and the ROI information. The second configuration parameter includes, but is not limited to, a second image resolution higher than the first image resolution, for example 1080P. The education APP can thus accurately recognize and determine the word or picture to be interpreted from the high-resolution image.
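Illustratively, such a photographing request could be assembled as follows. This Python sketch is for illustration only; the field names, camera ID, and coordinate values are assumptions rather than the patent's actual wire format.

```python
# Hypothetical photographing request carrying the virtual camera ID, the
# second configuration parameter, and the ROI information.
photo_request = {
    "virtual_camera_id": "desk_lamp_cam_01",  # assumed identifier
    "second_config": {
        "image_resolution": "1080P",          # second image resolution, above the 480P preview
    },
    "roi_info": {
        "center": (427, 300),                 # center point coordinates in the preview image (assumed)
        "size": (320, 240),                   # region range: width and height (assumed)
    },
}
```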
S606, the desk lamp camera configures itself according to the second configuration parameter carried in the photographing request, captures an image at the second image resolution, and sends the captured image to the hardware abstraction service.
S607, the hardware abstraction service determines the ROI image according to the ROI information.
The hardware abstraction service may crop the ROI image in the captured image based on the ROI information.
In an alternative embodiment, the photographing request includes, among other things, the second configuration parameter but not the ROI information. In this case, the tablet transmits the photographing request to the desk lamp side; the desk lamp camera configures itself according to the second configuration parameter carried in the photographing request, captures an image at the second image resolution, and returns the captured image to the education APP in the tablet. The education APP then determines the ROI image from the ROI information, for example by cropping the ROI image out of the captured image.
S608, the desk lamp transmits the ROI image to the education APP in the tablet.
Compared with the desk lamp directly returning the high-resolution captured image to the education APP in the tablet, returning the cropped ROI image reduces the amount of data transmitted and the bandwidth occupied.

S609, the education APP in the tablet accurately recognizes the ROI image and determines the word to be interpreted.

S610, the education APP in the tablet performs a word lookup on the word to be interpreted, and displays the paraphrase of the word to be interpreted.
This process is not explained in detail here; reference may be made to the foregoing, which will not be repeated.

Similarly, the user may perform the pointing operation with a pointing tool such as a stylus, which this embodiment does not limit. Correspondingly, the education APP recognizes the pointing tool in the preview image to determine whether the user has a word-searching intention, and determines the ROI information according to the position information of the pointing tool in the preview image.

Similarly, the user may also point at a picture in the book with a finger or a pointing tool such as a stylus. Correspondingly, the education APP determines the ROI information according to the position information of the finger or pointing tool, determines the ROI image in the captured image from the ROI information, performs picture content recognition on the ROI image, displays the paraphrase corresponding to the picture, and may read the displayed paraphrase aloud. This case is not described in detail in this embodiment.

For the application scenario of this flow, reference may be made to the scenario shown in fig. 1a and figs. 7a-7b. Referring to fig. 7a (2), when the user points a finger at a word in a book, the desk lamp camera captures a preview image of the pointing finger, which is displayed in the image preview window 7021. The education APP recognizes the user's finger in the preview image, determines the ROI information, and generates a photographing request from the ROI information and the high image resolution to trigger the desk lamp camera to capture an image. The desk lamp camera captures an image at the high resolution, the desk lamp side crops the high-resolution image according to the ROI information to obtain the ROI image, and the ROI image is returned to the education APP in the tablet. The education APP accurately recognizes the ROI image and determines the word to be interpreted. After the education APP looks up the word's paraphrase, the corresponding paraphrase is displayed on the interface; see fig. 7b.
Scene two
Referring to the application scenario schematic diagram shown in fig. 1a, this scenario is illustrated by taking the job function as an example. When students need to submit homework online, they can click the photographing option in the education APP, capture a job image with the desk lamp camera, and upload the job image to the database through the education APP.
FIG. 8 is a schematic diagram showing the interaction of the modules. Referring to fig. 8, the embodiment of the application provides a method flow for cooperative work of a tablet and a desk lamp, specifically including:
S801, in response to the user clicking the job function, the education APP in the tablet displays a job submission list.

Note that the job submission list is a list containing multiple job submission options, one option per job; see the interface 704 in fig. 9a. The job options may be grouped by subject or by time, which this embodiment does not limit.

If there is only one job for which a job image needs to be submitted in the job function of the education APP, the education APP does not display the job submission list. In that case, in response to the user clicking the job function, the tablet and the desk lamp execute the device discovery stage, the virtual camera enabling stage, and the virtual camera preview access stage, and the tablet displays the preview interface.

S802, in response to the user clicking a job submission option, the tablet and the desk lamp execute the device discovery stage, the virtual camera enabling stage, and the virtual camera preview access stage, and the tablet displays a preview interface.
For the flow of the device discovery phase, the virtual camera enabling phase, and the virtual camera preview access phase, reference may be made to the foregoing, and details thereof will not be repeated herein.
It should be noted that in the virtual camera preview access stage, the hardware virtualization API in the tablet carries the camera configuration parameters when sending the virtual camera access instruction to the camera service. The camera configuration parameters may include, but are not limited to, image resolution and image acquisition frame rate. The desk lamp camera configures itself according to the received parameters and acquires preview image data at the corresponding image resolution and frame rate.

In this scenario, to reduce the bandwidth occupied by the preview image stream, the desk lamp camera may acquire the preview image at a low resolution during the virtual camera preview access stage. For example, the hardware virtualization API in the tablet may send the camera service a virtual camera access instruction carrying a first configuration parameter, where the first configuration parameter includes a first image resolution (e.g., 480P). The desk lamp camera configures itself according to the received parameter and acquires preview image data at the first image resolution and the corresponding image acquisition frame rate.

S803, in response to the user clicking the photographing option, the education APP in the tablet generates a photographing request.

When the user places a job, book, or the like in the acquisition area of the desk lamp camera, the user can click the photographing option to trigger the desk lamp camera to capture a job image.

The photographing request may include, but is not limited to, the virtual camera ID corresponding to the desk lamp camera and a second configuration parameter for the desk lamp camera. The second configuration parameter includes, but is not limited to, a second image resolution higher than the first image resolution, for example 1080P. The education APP can thus upload high-resolution job images.
S804, the tablet transmits a photographing request to the desk lamp side.
S805, the desk lamp camera configures itself according to the second configuration parameter carried in the photographing request and captures an image at the second image resolution.

S806, the desk lamp transmits the captured image to the education APP in the tablet.

S807, the education APP in the tablet displays the captured image.

The education APP in the tablet receives and displays the job image captured by the desk lamp camera. If the user is satisfied with the captured job image, the user may click the submit option to upload the job image to the database; if not, the user may click the photographing option again to trigger the desk lamp camera to re-capture the job image.
S808, in response to the user clicking the submit option, the education APP in the tablet uploads the shot image to the database.
This process is not explained in detail here; reference may be made to the foregoing, which will not be repeated.

It should be noted that the job image mentioned above is merely an example; the user may click the photographing option to trigger the desk lamp camera to capture other images. After the desk lamp sends the captured image to the education APP in the tablet, the education APP can upload the received image to the corresponding database.

Fig. 1a and figs. 9a-9c show an exemplary application scenario. As shown in fig. 9a (1), the tablet displays the education APP interface 701, on which various function options of the education application are shown, including but not limited to a word searching function, a click-to-read function, a job function, a photographing function, etc. The user clicks the job function option 7012, and in response, the education APP in the tablet displays a job submission list, as shown in (2) of fig. 9a. In the job submission list interface 704 shown in fig. 9a (2), multiple job submission options are displayed (e.g., submit job 1, submit job 2, submit job 3, submit job 4, etc.), different options corresponding to different jobs. Taking the case where the user needs to upload a job image for submit job 4, the user clicks the submit job 4 option 7042. In response, the tablet may display the job submission interface 705 shown in fig. 9b (1).

With continued reference to (1) in fig. 9b, the job submission interface 705 displays an image preview window 7041, a photographing option 7051, and a submit option 7052. The preview window 7041 shows the preview image acquired by the desk lamp camera in real time. When the user places a job, book, or the like in the desk lamp camera's acquisition area, the user can click the photographing option 7051 to trigger the desk lamp camera to capture a job image. In response, the education APP generates a photographing request and sends it to the desk lamp side to invoke the desk lamp camera to capture the job image. The desk lamp camera captures the job image at the high image resolution carried in the photographing request and returns it to the education APP in the tablet for display; see the interface 706 shown in (2) in fig. 9b.

With continued reference to (2) in fig. 9b, the interface 706 displays an image preview window 7041, a photographing option 7051, a submit option 7052, and a job image 7061 captured by the desk lamp camera. A close option 7062 is also displayed on the job image 7061. If the user is not satisfied with the job image 7061, the user may click the close option 7062, after which the job image 7061 is no longer displayed on the interface and the user may click the photographing option 7051 to trigger the desk lamp camera to re-capture the job image. If the user is satisfied with the job image 7061, the user may click the submit option 7052. In response, the tablet may display the confirmation interface 701 shown in fig. 9c, in which the job image 7061 to be submitted and a confirm-submission window 7071 are displayed. If the user clicks the confirm option 7072 in the confirm-submission window 7071, the education APP uploads the job image 7061 to the database in response. If the user clicks the cancel option in the confirm-submission window 7071, the education APP may display the interface shown in (1) of fig. 9b in response, so that the user can click the photographing option 7051 to trigger the desk lamp camera to re-capture the job image.

Note that, with continued reference to (2) in fig. 9b, if the user is satisfied with the job image 7061 and clicks the submit option 7052, the education APP may also skip the interface shown in fig. 9c and directly upload the job image 7061 to the database in response; this embodiment does not limit this.
Scene three
Referring to the application scenario schematic diagram shown in fig. 1a, this scenario is illustrated by taking the click-to-read function (also called the reading function) as an example. When students want the education APP to read the contents of a book aloud, the desk lamp camera can acquire book images in real time, so that the education APP can load the corresponding book content according to the book images and determine the content to be read aloud according to the students' finger positions or page-turning operations.
An interaction diagram of the modules is shown in fig. 10. Referring to fig. 10, the embodiment of the application provides a method flow for cooperative work of a tablet and a desk lamp, specifically including:
S901, in response to the user clicking the click-to-read function, the tablet and the desk lamp execute the device discovery stage, the virtual camera enabling stage, and the virtual camera preview access stage, and the tablet displays a preview interface.
For the flow of the device discovery phase, the virtual camera enabling phase, and the virtual camera preview access phase, reference may be made to the foregoing, and details thereof will not be repeated herein.
It should be noted that in the virtual camera preview access stage, the hardware virtualization API in the tablet carries the camera configuration parameters when sending the virtual camera access instruction to the camera service. The camera configuration parameters may include, but are not limited to, image resolution and image acquisition frame rate. The desk lamp camera configures itself according to the received parameters and acquires preview image data at the corresponding image resolution and frame rate.

In this scenario, to reduce the bandwidth occupied by the preview image stream, the desk lamp camera may acquire the preview image at a low resolution during the virtual camera preview access stage. For example, the hardware virtualization API in the tablet may send the camera service a virtual camera access instruction carrying a first configuration parameter, where the first configuration parameter includes a first image resolution (e.g., 480P). The desk lamp camera configures itself according to the received parameter and acquires preview image data at the first image resolution and the corresponding image acquisition frame rate.

S902, the education APP in the tablet identifies the preview image and determines the book name.

The education APP may perform book information recognition on each received preview image, or may periodically perform book information recognition on the most recently received preview image; this embodiment does not limit this.

For example, the education APP may integrate an image recognition algorithm to perform the image recognition operation, or may call an image recognition service to do so; this embodiment does not limit this.
The image recognition algorithm may refer to the prior art, and this embodiment is not described herein.
S903, the education APP in the tablet searches the database according to the book name and loads the book content corresponding to the book name.

If the education APP retrieves books of different versions from the database for the same book name, a corresponding book list can be displayed for the user to select from. In response to the user's selection of a particular version, the education APP loads the content corresponding to that version of the book.

S904, in response to the user's page-turning or finger-click operation, the education APP in the tablet identifies the preview image, determines the paragraph to be read, and reads the corresponding paragraph aloud.

After the book content is loaded, the education APP can recognize a pointing finger or stylus in each received preview image, or may do so periodically on the most recently received preview image; this embodiment does not limit this. In response to the user's click operation, the education APP can identify the book page number and the user's click position information, such as coordinate information, from the preview image; it can then determine the paragraph to be read in the loaded book content from the page number and click position, and read the corresponding paragraph aloud.

After the book content is loaded, the education APP can also perform page-turning recognition on the preview image stream. In response to the user's page-turning operation, the education APP can determine the paragraph to be read in the loaded book content from the identified book page number, and read the corresponding paragraph aloud.
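Illustratively, locating the paragraph to be read from the page number and the click position might look like the following minimal Python sketch; the mapping of book content to pages and paragraph bounding boxes is an assumed representation for illustration only.

```python
def find_paragraph(book_content: dict, page_number: int,
                   click_pos: tuple[int, int] | None = None) -> str | None:
    """Sketch: `book_content` is assumed to map a page number to a list of
    ((left, top, right, bottom), paragraph_text) pairs. With no click
    position (page-turning), the first paragraph of the page is chosen."""
    paragraphs = book_content.get(page_number, [])
    if click_pos is None:
        return paragraphs[0][1] if paragraphs else None
    x, y = click_pos
    for (left, top, right, bottom), text in paragraphs:
        if left <= x <= right and top <= y <= bottom:
            return text
    return None

book = {3: [((0, 0, 400, 100), "Paragraph one..."),
            ((0, 120, 400, 260), "Paragraph two...")]}
assert find_paragraph(book, 3, (200, 150)) == "Paragraph two..."
```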
This process is not explained in detail here; reference may be made to the foregoing, which will not be repeated.

Fig. 1a and figs. 11a-11b show an exemplary application scenario. As shown in (1) of fig. 11a, the tablet displays the education APP interface 701, on which various function options of the education application are shown, including but not limited to a word searching function, a click-to-read function, a job function, a photographing function, etc. The user clicks the click-to-read function option 7013, and in response, the tablet executes the device discovery stage, the virtual camera service enabling stage, and the virtual camera preview access stage.
In the device discovery stage, if the device management service of the tablet filters out exactly one linkable desk lamp, the tablet automatically executes the virtual camera service enabling stage and the virtual camera preview access stage, and displays, for example, the interface shown in (2) in fig. 11a. If the filtering yields multiple linkable desk lamps, the tablet displays a desk lamp selection interface showing a list of candidate desk lamps; in response to the user's selection operation, the educational application determines one desk lamp to link with and continues with the virtual camera service enabling stage and the virtual camera preview access stage, displaying, for example, the interface shown in (2) of fig. 11a.

As shown in fig. 11a (2), an image preview window 7081 is displayed in the interface 708, showing the preview image acquired by the desk lamp camera in real time. The education APP identifies the preview image to determine the book name, then retrieves the database based on the book name. If a corresponding book is retrieved, the book content is loaded, as shown in (1) of fig. 11b.

With continued reference to fig. 11b (1), the interface 709 displays an image preview window 7081, the identified book name 7091, and a book content loading progress indicator 7092. After the book content is loaded, the education APP can recognize the user's click-to-read or page-turning operations from the preview image. Taking a page-turning operation as an example, referring to the interface 710 shown in (2) in fig. 11b, the user's page-turning action can be seen in the image preview window 7081. In response, the education APP recognizes the book page number from the preview image, determines the paragraph to be read in the loaded book content according to the identified page number, and reads the corresponding paragraph aloud.
The collaborative work method provided by the embodiments of this application combines a household tablet with a camera-equipped desk lamp to deliver a professional online education experience. The desk lamp's camera cooperates with the tablet's education APP to cover students' photographing-dependent scenarios such as finger-pointing word search, homework submission, and book reading.

Based on the above hardware and software structures of the electronic device 100 (for example, a tablet) and the IOT device 200 (for example, a desk lamp), the data the desk lamp transmits to the tablet differs by service when the two devices cooperate in the learning scenario. For example, in the above click-to-read scenario, the desk lamp transmits preview stream data captured by the camera to the tablet, i.e., a continuously transmitted video data stream; in the word searching and job submission scenarios, the desk lamp transmits image data captured by the camera to the tablet.

In particular, in practical applications, when the desk lamp transmits preview stream data and image data to the tablet, the currently generated preview stream data is packaged into preview stream data packets of a set size based on the same packetization principle, and parts of the whole image data are likewise packaged into image data packets; these are then transmitted to the tablet side based on the Real-time Transport Protocol (RTP). For preview stream data, no packet assembly is involved: each received preview stream data packet is processed and displayed directly, so there is no distinction between first, middle, and last packets. For image data, however, the finally displayed photo must be reconstructed according to the positional relationship between the image data small packets, and the assembly operation is triggered by the last packet. Yet in the current image data transmission mode, the data body of an image data packet must be parsed to determine whether it is the first packet.

In addition, even after parsing the data body of an image data packet, it cannot be determined whether that packet is the tail packet; that is, packet assembly cannot be triggered quickly and accurately.

In view of this, the present application provides an image data transmission method aimed at quickly and accurately determining whether a received image data packet is a first packet, a middle packet, or a tail packet without parsing its data body, so that the packet assembly operation can be performed as soon as a received packet is identified as the tail packet.
Referring to fig. 12, a specific flow of image data transmission between a tablet and a desk lamp is shown by way of example:
s1001, the tablet generates a photographing request (including parameter information that the photographed picture needs to satisfy) in response to the operation behavior of the user.
As can be seen from the above description, the transmission of image data is involved only in the word searching scenario and the job submission scenario; thus the user operation to which the tablet responds is specifically the user triggering the word searching function or the job submission function.

Illustratively, in some implementations, the user's operation is, for example, clicking a control corresponding to the word searching function/scenario displayed in the tablet's current interface, or clicking a control corresponding to the job submission function/scenario.

Illustratively, in other implementations, the user's operation may be, for example, commanding the tablet by voice to select the word searching function or the job submission function.
It should be understood that the above description is only an example for better understanding of the technical solution of the present embodiment, and is not to be taken as the only limitation of the present embodiment.
In addition, as can be seen from the above description, the word searching scenario and the job submission scenario place higher resolution requirements on the image data than the click-to-read scenario. Therefore, when the photographing request is generated in response to the user's operation, it must at least carry the parameter information, such as the resolution, that the captured photo needs to satisfy.
S1002, the tablet sends a photographing request to the desk lamp.
After the tablet generates the photographing request, the tablet may send the photographing request to the desk lamp, for example, through a control channel negotiated with the desk lamp.
S1003, the desk lamp detects whether the camera is abnormal.

In practical applications, the desk lamp's camera may be unable to shoot due to various factors. To avoid unnecessary processing as far as possible, such as parsing the photographing request and checking the parameter information, the desk lamp can first detect, in the current scenario, whether its camera is abnormal, i.e., whether the camera can be used normally.

Accordingly, if the camera is abnormal, the desk lamp generates an abnormal feedback packet; to distinguish the cases, the abnormal feedback packet fed back when the camera is abnormal is referred to as abnormal feedback packet 1, and the abnormal feedback packet fed back when the parameters are abnormal is referred to as abnormal feedback packet 2. Step S1004 is then performed.

Otherwise, i.e., when the camera is normally available, the desk lamp extracts the parameter information carried in the photographing request, and step S1006 is performed.
S1004, the desk lamp sends the abnormal feedback packet 1 to the tablet.
Illustratively, in some implementations, the abnormal feedback packet 1 may be transmitted to the tablet through an image data channel negotiated with the tablet.

S1005, the tablet responds according to the cause of the abnormality in the abnormal feedback packet 1.

For example, in some implementations, to let the tablet easily learn the cause of the abnormality from the abnormal feedback packet 1, an error code identifying the abnormality may be carried in the data header of the abnormal feedback packet 1. The tablet then does not need to parse the data body: it can learn the cause of the abnormality directly from the data header and respond accordingly. For example, the tablet may show a prompt on the user interface notifying the user that the desk lamp's camera is abnormal, so that the user can check the desk lamp's camera according to the prompt.
It should be understood that the above description is only an example for better understanding of the technical solution of the present embodiment, and is not to be taken as the only limitation of the present embodiment.
S1006, the desk lamp extracts the parameter information in the photographing request.

Specifically, when the desk lamp's camera is normally available, the desk lamp can extract the parameter information from the photographing request sent by the tablet, such as the resolution information of the photo to be captured.
S1007, the desk lamp detects whether the parameter information is abnormal.
Specifically, in this embodiment, the desk lamp's detection of the parameter information is, for example, determining whether the camera supports the resolution indicated by the resolution information.

Accordingly, if not supported, the parameters are determined to be abnormal and step S1008 is performed; if supported, the parameter information is determined to be normal and step S1010 may be performed.
S1008, the desk lamp sends the abnormal feedback packet 2 to the tablet.

Similarly, the abnormal feedback packet 2 may be transmitted to the tablet through an image data channel negotiated with the tablet.
In addition, it should be noted that in practical applications the desk lamp side may also generate other abnormal feedback packets for other causes of abnormality, such as network transmission abnormalities, and feed them back to the tablet side; the above description covers only two specific abnormal scenarios and does not limit this embodiment.

S1009, the tablet responds according to the cause of the abnormality in the abnormal feedback packet 2.

Illustratively, the error code carried in the abnormal feedback packet 2 may likewise be stored in the data header of the abnormal feedback packet 2. The tablet can thus quickly learn the cause of the abnormality directly from the data header without parsing the data body, and respond accordingly. For example, the tablet may regenerate the photographing request using a resolution supported by the desk lamp and send the regenerated photographing request to the desk lamp.
It should be understood that the above description is only an example for better understanding of the technical solution of the present embodiment, and is not to be taken as the only limitation of the present embodiment.
S1010, the desk lamp adjusts the camera's shooting parameters according to the parameter information, and shoots with the adjusted camera to obtain an image data packet.

The image data packet in this embodiment specifically includes the image data (data body) of an entire photo to be displayed on the tablet side, and a data header allocated for the image data.
In order to better understand the above-described image data packet generation process, the following description is made with reference to fig. 13.
Referring to fig. 13, take as an example a photo captured with the adjusted camera whose image data, for the entire photo to be displayed on the tablet side, is 10000 bytes. After obtaining the image data, based on this embodiment's image data packet format, the image data serves as the data body and a 128-byte data header is prepended to it, yielding the image data packet corresponding to the whole photo.

It will be appreciated that byte numbering starts from 0. The overall size of the image data packet is the 128-byte data header plus the 10000-byte data body, i.e., 10128 bytes: bytes 0-127 hold the contents of the data header, and bytes 128-10127 hold the 10000 bytes of image data.

In this embodiment, the data header of the image data packet includes two parts: one part records an identification of whether the image data is normal, referred to in this embodiment as the data-normal identification field, and the other part is reserved for extension by subsequent services, referred to in this embodiment as the extension field. For a better understanding of this data header structure, a description is given below with reference to fig. 14.

Referring to fig. 14, of the 128 bytes of the data header, bytes 0-3 record the identification of whether the image data is normal, i.e., the data-normal identification field, and bytes 4-127 are reserved for extension by subsequent services, i.e., the extension field.

For example, in some implementations, it may be agreed that "0" identifies normal data and "1" identifies abnormal data. That is, if byte 0 contains "0", the image data in the data body is normal (essentially the binary data of the photo to be returned to the tablet), i.e., the corresponding photo can be restored by the tablet's processing; if byte 0 contains "1", the image data in the data body is abnormal and may, for example, be an abnormality string.

For example, in some implementations, different abnormality strings may be agreed to correspond to different causes of abnormality, which facilitates later locating and resolving the abnormality.

In addition, it should be noted that in some implementations the extension field may be further divided according to service requirements. For example, for a scenario in which one desk lamp needs to transmit image data to multiple tablet and mobile phone devices, it may be agreed that the photographing request carries a unique identifier of the photo to be captured, and when the desk lamp generates the image data packet, an identifier marking the photo's uniqueness is added in bytes 4-32 of the extension field.
It should be understood that the above description is only an example for better understanding of the technical solution of the present embodiment, and is not to be taken as the only limitation of the present embodiment.
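Illustratively, constructing such an image data packet can be sketched in a few lines of Python. This is for illustration only: placing the normal/abnormal flag in byte 0 follows the convention above, while the exact layout of bytes 1-3 and of the photo-uniqueness identifier in the extension field are assumptions.

```python
def build_image_data_packet(image_data: bytes, normal: bool = True,
                            photo_id: bytes = b"") -> bytes:
    """Sketch: a 128-byte data header followed by the image data body."""
    header = bytearray(128)
    # Bytes 0-3: data-normal identification field. Per the convention above,
    # byte 0 carries "0" (normal) or "1" (abnormal); the use of bytes 1-3
    # is not specified here and they stay zeroed.
    header[0] = ord("0") if normal else ord("1")
    # Bytes 4-127: extension field; a photo-uniqueness identifier may be
    # placed in bytes 4-32 (an assumed use, as mentioned above).
    n = min(len(photo_id), 29)
    header[4:4 + n] = photo_id[:n]
    return bytes(header) + image_data

packet = build_image_data_packet(b"\x00" * 10000, photo_id=b"photo-001")
assert len(packet) == 10128  # 128-byte data header + 10000-byte data body
```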
S1011, the desk lamp splits the data in the image data packet, by a set size, into N image data small packets (each small packet includes an RTP data header, an identification field recording whether the small packet is a first packet, a middle packet, or a tail packet, and a data body storing the image data).

It can be appreciated that the size of the image data small packets may be set dynamically according to the bandwidth resources of the transmission channel, the current network quality, and so on, or may be set to a fixed value; this embodiment does not limit this.

For convenience of explanation, this embodiment takes the above 10128-byte image data packet as an example and describes, with reference to fig. 15, the process of dividing it into multiple image data small packets.

Referring to fig. 15, taking as an example that the divided image data small packets are still transmitted using the RTP protocol, the data structure of each small packet in this embodiment includes three parts: an RTP data header, an identification field, and a data body.

With continued reference to fig. 15, in any image data small packet, bytes 0-11 serve as the RTP data header, byte 12 as the identification field, and bytes 13-1393 as the data body. That is, bytes 13-1393 of each small packet store data from the image data packet, so each small packet carries at most 1381 bytes of that data.

In addition, since the 128-byte data header of the image data packet is included in the data body of the first small packet, the first small packet (image data packet 1 in fig. 15) carries 1253 bytes of image data, while subsequent small packets need not carry that 128-byte header and therefore carry 1381 bytes each. The 10128-byte image data packet can thus be split into 8 small packets of the above structure, with the tail packet (image data packet 8) smaller than 1394 bytes: a 12-byte RTP header + a 1-byte identification field + 461 bytes of image data, i.e., 474 bytes in total.
It should be understood that the above description is only an example for better understanding of the technical solution of the present embodiment, and is not to be taken as the only limitation of the present embodiment.
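Illustratively, the splitting step can be sketched as follows in Python; only the data bodies are produced here, since the RTP data header and the identification field are described separately below. The packet count and tail-packet size follow from the 1381-byte body capacity.

```python
BODY_CAPACITY = 1381  # bytes 13-1393 of each small packet

def split_into_bodies(image_data_packet: bytes) -> list[bytes]:
    """Sketch: split a whole image data packet (128-byte data header plus
    data body) into small-packet data bodies of at most 1381 bytes each."""
    return [image_data_packet[i:i + BODY_CAPACITY]
            for i in range(0, len(image_data_packet), BODY_CAPACITY)]

bodies = split_into_bodies(b"\x00" * 10128)
assert len(bodies) == 8            # 7 full bodies of 1381 bytes...
assert len(bodies[-1]) == 461      # ...plus the tail packet's remainder
```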
Referring to fig. 16, a specific format of the RTP data header of each image data small packet is shown by way of example. As shown in fig. 16, byte 0 stores the current version number (V), byte 1 stores the data type (T) of the image data, bytes 2-3 store the sequence number (SEQNUM), bytes 4-7 store the timestamp (RTPTIME), and bytes 8-11 store the synchronization source identifier (SSRC).

It should be noted that the data type mentioned above may specifically be a Portable Network Graphics (PNG) type, a Joint Photographic Experts Group (JPEG) type, etc., which this embodiment does not limit.

In addition, regarding the sequence number: in practical applications, the sequence number increases by 1 each time the desk lamp transmits one image data small packet.

In addition, the timestamp specifically reflects the acquisition time of the first byte of image data in the data body of the small packet. In this way, the small packets can be combined in timestamp order, restoring the image data packet of the whole photo so that the tablet can process the photo's image data and display it on the user interface.

In addition, the synchronization source identifier specifically identifies each image data small packet uniquely, which avoids repeated transmission of the same small packet causing redundancy in the buffer queue on the tablet side.
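Illustratively, packing this 12-byte header can be sketched with Python's struct module; big-endian (network) byte order is an assumption consistent with standard RTP, and the example field values are placeholders.

```python
import struct

def build_rtp_header(version: int, data_type: int, seq_num: int,
                     timestamp: int, ssrc: int) -> bytes:
    """Pack the 12-byte header described above: byte 0 version (V),
    byte 1 data type (T), bytes 2-3 sequence number (SEQNUM),
    bytes 4-7 timestamp (RTPTIME), bytes 8-11 synchronization source
    identifier (SSRC)."""
    return struct.pack(">BBHII", version, data_type, seq_num, timestamp, ssrc)

hdr = build_rtp_header(version=2, data_type=1, seq_num=0,
                       timestamp=0, ssrc=0x12345678)
assert len(hdr) == 12
```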
Referring to fig. 17, a specific format of the identification field of each image data small packet is shown by way of example. As shown in fig. 17, bits 0 and 1 of byte 12 store packet identification information identifying whether the current small packet is a first packet, a middle packet, or a tail packet; bit 2 stores a state identification indicating whether the current small packet is normal; and the remaining bits 3-7 are reserved for extension according to service requirements.

That is, in the small packet structure provided in this embodiment, comprising the RTP data header, the identification field, and the data body, the identification field is dedicated to the tablet's packet assembly processing.

Regarding the packet identification information, in some implementations it may be agreed that "10" (bit 0 is "1", bit 1 is "0") denotes that the current small packet is the first packet, i.e., the one containing the beginning of the image data; packet assembly must start from the image data in this small packet.

For example, it may further be agreed that "01" (bit 0 is "0", bit 1 is "1") denotes that the current small packet is the tail packet, i.e., the one containing the end of the image data; it is the last small packet of the photo being acquired, and receiving it triggers the packet assembly operation.

Illustratively, it may also be agreed that "00" (bit 0 is "0", bit 1 is "0") denotes that the current small packet is a middle packet.

In addition, it should be noted that when image data of multiple photos is to be transmitted, two consecutive photos may share one small packet, for example the end of the previous photo and the beginning of the next photo falling in the same small packet. It may therefore be agreed that "11" (bit 0 is "1", bit 1 is "1") denotes that the current small packet contains both the end of the previous photo and the beginning of the next photo.

Regarding the state identification field, i.e., the content filled into bit 2 of byte 12: it takes one of two values, "0" and "1". When filled with "0", the current small packet is normal; when filled with "1", the current small packet is abnormal.
It should be understood that the above description is only an example for better understanding of the technical solution of the present embodiment, and is not to be taken as the only limitation of the present embodiment.
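Illustratively, encoding and decoding this one-byte identification field can be sketched as follows; treating bit 0 as the most significant bit position is an assumed ordering, made only so the sketch is concrete.

```python
FIRST, MIDDLE, TAIL, TAIL_AND_FIRST = "10", "00", "01", "11"

def build_id_field(packet_kind: str, abnormal: bool = False) -> int:
    """Encode byte 12: bits 0-1 carry the packet identification information
    ("10" first, "00" middle, "01" tail, "11" end of the previous photo
    plus beginning of the next); bit 2 carries the state identification
    ("0" normal, "1" abnormal); bits 3-7 stay reserved (zero)."""
    bit0, bit1 = int(packet_kind[0]), int(packet_kind[1])
    return (bit0 << 7) | (bit1 << 6) | (int(abnormal) << 5)

def parse_id_field(value: int) -> tuple[str, bool]:
    """Decode byte 12 back into (packet kind, abnormal flag)."""
    kind = f"{(value >> 7) & 1}{(value >> 6) & 1}"
    return kind, bool((value >> 5) & 1)

assert parse_id_field(build_id_field(TAIL)) == (TAIL, False)
assert parse_id_field(build_id_field(FIRST, abnormal=True)) == (FIRST, True)
```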
For a better understanding of the normal and abnormal data structures when a small packet is a first packet, a middle packet, or a tail packet, and of the data structure of the abnormal feedback packets (abnormal feedback packet 1 and abnormal feedback packet 2 described above) in this embodiment, a description is given below with reference to figs. 18a-20b.
Referring to fig. 18a and 18b, in the data structure of the first packet, bit 0 of the identification field is filled with "1" and bit 1 with "0"; that is, when the packet identification information in bits 0-1 is "10", the current small packet is the first packet. As can be seen from the above description, in this embodiment the agreed state flag "0" identifies a normal packet and "1" identifies an abnormal packet.

On this basis, when bit 2 of the identification field is filled with "0" (as shown in fig. 18a), the current small packet is a normal packet; that is, for a normal first packet, the identification field is filled as shown in fig. 18a, and besides the actual image data, 128 bytes are reserved in the data body for the data of the data header shown in fig. 14.

For example, when bit 2 of the identification field is filled with "1" (as shown in fig. 18b), the current small packet is an abnormal packet; that is, for an abnormal first packet, the identification field is filled as shown in fig. 18b, and besides the actual image data, 128 bytes are reserved in the data body for the data of the data header shown in fig. 14.
Referring to fig. 19a and 19b, in the data structure of the middle packet, bit 0 of the identification field is filled with "0" and bit 1 with "0"; that is, when the packet identification information in bits 0-1 is "00", the current small packet is a middle packet. As can be seen from the above description, in this embodiment the agreed state flag "0" identifies a normal packet and "1" identifies an abnormal packet.

On this basis, when bit 2 of the identification field is filled with "0" (as shown in fig. 19a), the current small packet is a normal packet; that is, for a normal middle packet, the identification field is filled as shown in fig. 19a, and every byte in the data body is used to fill actual image data.

Illustratively, when bit 2 of the identification field is filled with "1" (as shown in fig. 19b), the current small packet is an abnormal packet; that is, for an abnormal middle packet, the identification field is filled as shown in fig. 19b, and every byte in the data body is used to fill actual image data.
Referring to figs. 20a and 20b, for the data structure of the tail packet, bit 0 of the identification field is filled with "0" and bit 1 with "1"; that is, when the packet identification information corresponding to bits 0-1 is "01", it indicates that the current image data packet is the tail packet. As can be seen from the above description, in this embodiment, when the agreed state identification is "0", the image data packet is identified as a normal packet, and when the state identification is "1", the image data packet is identified as an abnormal packet.
Based on this, when bit 3 of the identification field is filled with "0" (as shown in fig. 20a), it indicates that the current image data packet is a normal packet; that is, for a normal tail packet, the identification field is filled in as shown in fig. 20a, and every byte in the data body is used to fill the actual image data.
Illustratively, when bit 3 of the identification field is filled with "1" (as shown in fig. 20b), it indicates that the current image data packet is an abnormal packet; that is, for an abnormal tail packet, the identification field is filled in as shown in fig. 20b, and every byte in the data body is used to fill the actual image data.
In addition, the remaining image data may be insufficient to fill the tail packet, that is, it cannot fill all the bytes in the data body. In a scenario where the overall size of each image data packet is agreed in advance, the remaining portion of the tail packet's data body is then not filled with bytes of image data.
For example, for the remaining bytes in the data body described above, the remaining bytes may be uniformly filled with default content, such as "0".
It should be understood that the above description is only an example for better understanding of the technical solution of the present embodiment, and is not to be taken as the only limitation of the present embodiment.
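For ease of understanding, the head/middle/tail structures just described can be summarized in a small packetization sketch. This is a hypothetical rendering, not the embodiment's code: the per-packet body capacity BODY_SIZE is an assumed value, the first packet reserves its first 128 body bytes for the fig. 14 data header, and an underfull tail packet is padded with the default "0":

```python
# Hypothetical packetization sketch. BODY_SIZE is an assumed per-packet
# body capacity; the first packet reserves its first 128 body bytes for
# the fig. 14 data header, and an underfull tail packet is padded with 0x00.
BODY_SIZE = 1024
HEADER_RESERVED = 128

def packetize(image: bytes, data_header: bytes) -> list:
    assert len(data_header) == HEADER_RESERVED
    chunks = [image[:BODY_SIZE - HEADER_RESERVED]]     # first-packet payload
    offset = BODY_SIZE - HEADER_RESERVED
    while offset < len(image):
        chunks.append(image[offset:offset + BODY_SIZE])
        offset += BODY_SIZE
    packets = []
    for i, chunk in enumerate(chunks):
        first, last = i == 0, i == len(chunks) - 1
        body = (data_header + chunk) if first else chunk
        body = body.ljust(BODY_SIZE, b"\x00")          # pad the tail packet
        packets.append({"first": first, "last": last,
                        "abnormal": False, "body": body})
    return packets
```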
In addition, it should be noted that, based on the data structure of the small image data packet provided in this embodiment, the abnormal feedback packet 1 sent by the desk lamp to the tablet when the camera is abnormal, and the abnormal feedback packet 2 sent to the tablet when the camera does not support the resolution carried in the photographing request, may also be packetized according to the above packetization principle.
It will be appreciated that, since the abnormal feedback packets (abnormal feedback packet 1 and abnormal feedback packet 2) are typically small, after packetization according to the above principle all of their data can be stored in one small packet, in which case the packet identification information is "11". For an abnormal feedback packet, the data body has only 128 bytes, whose structure is similar to that of the data header shown in fig. 14: the error code identifying the abnormality that prevents the desk lamp side from photographing due to the camera or the resolution is filled in bytes 0-3, and bytes 4-17 serve as extension fields. That is, the data structure of the abnormal feedback packet may be as shown in fig. 21.
It can be appreciated that, since the abnormal feedback packet is essentially a normal packet, i.e., a packet whose error code in the data body the tablet is required to parse rather than discard without processing, bit 2 of the identification field of the abnormal feedback packet, which carries the state identification information, is filled with "0".
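A corresponding sketch of the abnormal feedback packet's layout is given below. The big-endian byte order and the helper names are assumptions; the "11" identification, the "0" state bit, the 128-byte body, and the error code in bytes 0-3 come from the description above:

```python
import struct

# Hypothetical builder for the abnormal feedback packet described above:
# packet identification "11", state bit "0" (the tablet must parse it,
# not drop it), and a 128-byte data body with the error code in bytes 0-3.
FEEDBACK_BODY_SIZE = 128

def build_feedback_packet(error_code: int) -> dict:
    body = bytearray(FEEDBACK_BODY_SIZE)
    body[0:4] = struct.pack(">i", error_code)   # signed error code, bytes 0-3
    # bytes 4-17 are extension fields and remain zeroed in this sketch
    return {"first": True, "last": True, "abnormal": False, "body": bytes(body)}

packet = build_feedback_packet(-299999990)      # desk-lamp-side fault, per the text
```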
S1012, the desk lamp sends N small image data packets.
Specifically, after packetizing the data in the image data packet according to the above data structure to obtain the N small image data packets to be transmitted, the desk lamp may transmit the N small packets through the image data channel negotiated with the tablet.
It will be appreciated that if it is determined during the negotiation phase that the transmitted data needs to be encrypted, each packet of image data needs to be encrypted according to the negotiated encryption mode, and then the encrypted packets of image data are sent to the tablet.
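Where encryption was negotiated, the send loop might look like the following sketch. This is an assumption-laden illustration: Fernet from the third-party `cryptography` package merely stands in for whatever cipher the two sides agreed on, and the channel is reduced to a callable:

```python
from cryptography.fernet import Fernet

# Sketch of the desk lamp's send loop (serialization of the RTP header
# and identification field is omitted). Encryption happens only when a
# key was negotiated; Fernet here is a stand-in for the agreed cipher.
def send_packets(channel_send, packets, key=None):
    cipher = Fernet(key) if key else None
    for pkt in packets:
        payload = pkt["body"]
        if cipher:
            payload = cipher.encrypt(payload)   # negotiated encryption mode
        channel_send(payload)                   # out over the image data channel

# Usage: key = Fernet.generate_key(); send_packets(sock_send, packets, key)
```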
S1013, the tablet performs subsequent processing according to the content of the identification field in the currently received image data packet.
It can be understood that, if the received image data packet is in encrypted form, the tablet needs to decrypt it according to the decryption mode negotiated with the desk lamp, so as to obtain the image data packet in plaintext form. Then, by judging whether bit 0 of the identification field at byte 12 is "1" and bit 1 is "0", i.e., whether the packet identification information is the agreed "10", the tablet determines whether the currently received image data packet is the packet containing the first frame, i.e., the first packet.
Accordingly, if so, the image data packet is buffered in the buffer queue, and the above processing is then performed on each subsequently received image data packet: if the packet identification information is the agreed "00", the packet is determined to be a middle packet and is likewise buffered; this continues until the packet identification information of a received image data packet is the agreed "01" or "11", at which point that packet is buffered in the buffer queue, reception of image data packets stops, and the group-packet operation starts.
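The collection behaviour of this step can be sketched as follows (illustrative only; `recv_packet` and `parse_id` are assumed helpers that receive one packet and return its two packet-identification bits as a string):

```python
from collections import deque

# Sketch of the tablet's collection loop: buffer from the first packet
# ("10") until a tail packet ("01") or a "11" packet arrives, then hand
# the buffer queue over for the group-packet operation.
def collect(recv_packet, parse_id):
    buffer = deque()
    while True:
        pkt = recv_packet()
        pid = parse_id(pkt)              # "10", "00", "01" or "11"
        if not buffer and pid != "10":
            continue                     # wait for a first packet
        buffer.append(pkt)
        if pid in ("01", "11"):
            return buffer                # stop receiving, start grouping
```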
For ease of understanding, the operations involved in the group-packet process are described below in connection with figs. 18a to 21.
Taking the 10000 bytes of image data in the above example as an example, when performing the group-packet operation, the tablet first takes small image data packet 1 out of the buffer queue. As shown in fig. 18a, the packet identification information recorded in the identification field at byte 12 is "10", indicating that small image data packet 1 is the first packet; the data body of this packet includes the data header of the entire image data packet shown in fig. 13, and the first byte of that header records the flag identifying whether the image data is normal. Therefore, the data body of small image data packet 1 can be parsed and the content recorded in byte 13 extracted, so as to determine whether the image data is normal.
For example, when the flag recorded in byte 13 is "1", it is known from the above description that "1" indicates that the image data is abnormal. In this case, the buffer queue may be emptied directly, without parsing the other image data packets, such as image data packets 2 to 8 described above.
For example, in some implementations, after the buffer queue is emptied, the tablet may display a prompt on the user interface informing the user that the image data photographed this time is abnormal and cannot be displayed, so that the user can trigger the photographing request again to re-acquire the image data.
For example, in other implementations, after the buffer queue is emptied, the tablet may also simulate a user operation and automatically generate a photographing request, so that the desk lamp re-executes the above procedure according to the new photographing request.
It should be understood that the above description is only an example for better understanding of the technical solution of the present embodiment, and is not to be taken as the only limitation of the present embodiment.
For example, when the flag recorded in byte 13 is "0", it is known from the above description that "0" indicates that the image data is normal. In this case, the other image data packets buffered in the buffer queue may be parsed.
For example, if bit 0 of the identification field at byte 12 of the currently parsed image data packet is "0" and bit 1 is "0", i.e., the packet identification information is "00" (as shown in fig. 19a), this indicates that the image data packet is a middle packet; and if the content filled in bit 2 is "0" (as shown in fig. 19a), this indicates that the image data packet is normal, so the data body can be parsed to obtain the image data of the middle packet.
For example, if bit 0 of the identification field at byte 12 of the currently parsed image data packet is "0" and bit 1 is "0", i.e., the packet identification information is "00" (as shown in fig. 19a), this indicates that the image data packet is a middle packet; but if the content filled in bit 2 is "1" (as shown in fig. 19b), this indicates that the image data packet is abnormal. To prevent the transmission abnormality from affecting the image data in the data body, in this case neither this middle packet nor the subsequent image data packets need to be parsed; the buffer queue is emptied directly, and the already spliced image data is deleted.
For example, if bit 0 of the identification field at byte 12 of the currently parsed image data packet is "0" and bit 1 is "1", i.e., the packet identification information is "01" (as shown in fig. 20a), this indicates that the image data packet is a tail packet; and if the content filled in bit 2 is "0" (as shown in fig. 20a), this indicates that the image data packet is normal, so the data body can be parsed to obtain the image data of the tail packet.
For example, if bit 0 of the identification field at byte 12 of the currently parsed image data packet is "0" and bit 1 is "1", i.e., the packet identification information is "01" (as shown in fig. 20a), this indicates that the image data packet is a tail packet; but if the content filled in bit 2 is "1" (as shown in fig. 20b), this indicates that the image data packet is abnormal. To prevent the transmission abnormality from affecting the image data in the data body, in this case the tail packet need not be parsed; the buffer queue is emptied directly, and the already spliced image data is deleted.
It should be understood that the above description is only an example for better understanding of the technical solution of the present embodiment, and is not to be taken as the only limitation of the present embodiment.
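Putting the above decision flow together, a sketch of the group-packet logic might read as follows; `pkt` objects are assumed to expose the parsed identification bits and the raw body, and the normal/abnormal flag of the whole image is assumed to be stored as the numeric value 0 or 1 in the first byte of the first packet's reserved data header:

```python
# Sketch of the group-packet operation described above. Returning None
# corresponds to emptying the buffer queue and deleting spliced data.
def group_packets(buffer):
    image = bytearray()
    for i, pkt in enumerate(buffer):
        if i == 0:                            # first packet, identification "10"
            if pkt["body"][0] == 1:           # data-header flag: image abnormal
                return None                   # skip packets 2..N entirely
            image += pkt["body"][128:]        # skip the 128-byte data header
        else:                                 # middle "00" or tail "01"
            if pkt["abnormal"]:               # state bit "1": transmission fault
                return None                   # drop spliced data
            image += pkt["body"]
    return bytes(image)
```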
Therefore, with the image data transmission method provided by this embodiment, the image data packets are packetized according to the above data structure before transmission, so that the tablet receiving the small image data packets can quickly and accurately determine, from the packet identification information in the identification field of each packet, whether the current image data packet is a first packet, a middle packet, or a tail packet.
That is, with the image data transmission method provided by this embodiment, the first packet, the middle packets, and the tail packet of each photo to be restored can be determined quickly and accurately without parsing the data bodies of the received image data packets, so that the group-packet operation can likewise be performed quickly and accurately.
In addition, it can be understood that the foregoing describes the image data transmission method provided in the present application from the perspective of the two devices, i.e., the tablet and the desk lamp. In a specific implementation, the above flow involves, on the tablet side, the education application, the device management service, the camera service, the transmission management service, the hardware virtualization service, and the virtual camera HAL, and, on the desk lamp side, the device interconnection service, the hardware abstraction service, and the camera. For the specific interactions between these modules, reference may be made to the descriptions of the word-searching scene and the job-submitting scene, which are not repeated here.
Further, consider the abnormal situations that may arise in the image data transmission scene: (1) due to a hardware problem on the desk lamp side, such as the camera abnormality or the unsupported resolution in the above embodiment, no image data packet can be generated for transmission at all; (2) the generated image data packets become abnormal during transmission; (3) the image data in the transmitted image data packets exceeds the maximum limit set on the tablet side; or (4) due to various factors such as network conditions, the tablet fails to synthesize the image data within the set timeout period. In each of these cases, the tablet side has no way to display the requested image data.
In view of this, the present application further provides an exception handling method for the image data transmission process, which aims to let the tablet learn in advance, without parsing the data bodies of received data packets, that the image data requested this time cannot be displayed, so that the image data channel can be rebuilt in time and the image data re-acquired.
In this embodiment, the exception handling method for the image data transmission process is specifically implemented by the network packet collection module, the network packet assembly module, the data processing module, and the image processing module in the hardware virtualization service on the tablet side.
Illustratively, the processing performed by these four functional modules is shown in fig. 22.
Referring to fig. 22, when the desk lamp side packetizes the photographed image data based on the packetization standard and transmits the packets to the tablet side through the image data channel negotiated with the tablet, the network packet collecting module integrated in the hardware virtualization service on the tablet side collects the image data packets transmitted from the desk lamp side and buffers them into the buffer queue.
With continued reference to fig. 22, after the network packet collecting module collects the image data packets, it reports them to the network packet assembling module. The network packet assembling module parses the image data packets, extracts the actual image data carried in the data bodies, splices the image data from all the packets, and sends the spliced image data to the data processing module. The data processing module adds other necessary data, such as identification information, to generate an access unit (AccessUnit), and sends the generated access unit to the image processing module. The image processing module then performs graphics processing on the access unit according to the configuration options, so that a picture that can be displayed in the user interface is finally obtained.
Details of the specific information included in the AccessUnit, and of how the AccessUnit is processed into a viewable image, may be found in the existing standards and are not repeated here.
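The four-module pipeline of fig. 22 can be approximated with queues and threads; the sketch below is illustrative only, with the data processing and image processing stages reduced to a callback:

```python
import queue
import threading

# Sketch of the four-module pipeline of fig. 22 on the tablet side.
# The collector buffers incoming packets; the assembler splices them
# and hands a finished image downstream via a callback.
packet_queue: queue.Queue = queue.Queue()

def network_packet_collector(recv_packet):
    while True:
        packet_queue.put(recv_packet())        # buffer and report each packet

def network_packet_assembler(on_image):
    spliced = bytearray()
    while True:
        pkt = packet_queue.get()
        spliced += pkt["body"]
        if pkt["last"]:                        # tail packet closes the image
            on_image(bytes(spliced))           # -> data processing -> image processing
            spliced = bytearray()

# Each module would run on its own thread, e.g.:
# threading.Thread(target=network_packet_collector, args=(recv,), daemon=True).start()
```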
The exception handling method for the image data transmission process provided in this embodiment is described below for each of the above-mentioned 4 abnormal situations.
Desk lamp side exception handling:
Referring to fig. 23, for example, when the camera on the desk lamp side is abnormal or the camera does not support the resolution carried in the photographing request, the desk lamp generates an abnormal feedback packet based on the packetization standard given above.
Illustratively, in this embodiment, the generated abnormal feedback packet specifically includes an RTP header, an identification field, and a data body based on the above packetization standard. The format of the RTP header is shown in fig. 16, the structures of the identification field and the data body are shown in fig. 21, and details are omitted here.
In addition, in order to enable the tablet side to learn the cause of the abnormality in each image data transmission process, different error codes may be agreed in advance to indicate the corresponding causes. In this embodiment, the error code "-299999990" indicates an abnormality caused by a fault on the desk lamp side; that is, when the desk lamp side cannot photograph image data due to the camera, the resolution, or the like, or the photographed image data itself is abnormal, the error code "-299999990" is filled in bytes 0-3 of the 128-byte data header reserved in the data body of the generated abnormal feedback packet.
In addition, it should be noted that, to the tablet side, both the abnormal feedback packet and the image data packets sent by the desk lamp can be regarded as network packets, so every network packet arriving at the tablet side is buffered into the buffer queue by the network packet collecting module and reported to the network packet assembling module. That is, an abnormal feedback packet with the above structure sent by the desk lamp side is buffered by the network packet collecting module and reported to the network packet assembling module, as shown in fig. 23.
After the network packet assembling module receives the network packet (in this embodiment, substantially the abnormal feedback packet) reported by the network packet collecting module, it extracts the identification information in the packet identification field.
As shown in fig. 23, when the packet identification extracted from the network packet by the network packet assembling module is "11", the packet is an abnormal feedback packet; at this point the data body of the packet needs to be parsed, and the error code filled in bytes 0-3 of the data body is extracted.
Since the abnormality here is caused by a fault on the desk lamp side, the extracted error code is "-299999990". For this kind of abnormality, the data processing module and the image processing module do not participate in the processing, and the network packet assembling module can directly report the error code across threads.
It can be appreciated that, in practical applications, the network packet collecting module, the network packet assembling module, the data processing module, and the image processing module may each be handled by an independent thread, so that asynchronous operation is realized and the four functional modules do not interfere with one another. Reporting the error code across threads therefore means bypassing the data processing module and the image processing module and reporting the error code directly to the upper-layer module/application.
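In code, this cross-thread reporting branch might look like the sketch below. Only the error code "-299999990" comes from the text; the byte order and the cause table are assumptions:

```python
import struct

# Sketch of the "11" branch described above: the assembler parses the
# feedback packet's body, pulls the signed error code from bytes 0-3,
# and reports it straight to the upper layer, bypassing the data and
# image processing threads.
CAUSES = {-299999990: "desk-lamp-side fault (camera abnormal or resolution unsupported)"}

def handle_feedback_packet(body: bytes, report_upward):
    code = struct.unpack(">i", body[0:4])[0]        # error code in bytes 0-3
    report_upward(code, CAUSES.get(code, "unknown cause"))
```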
For example, in some implementations, the error code, or a description of its cause, such as "the camera or the resolution is abnormal; please check the camera or reset the resolution", may be reported directly to the education application of the application layer and displayed in the user interface corresponding to the education application.
For example, in other implementations, the error code may instead be reported directly to the virtual camera, so as to automatically trigger the virtual camera to send a new photographing request to the desk lamp, with the resolution carried in the regenerated photographing request adjusted to one suitable for the camera of the current desk lamp.
It should be understood that the above description is only an example for better understanding of the technical solution of the present embodiment, and is not to be taken as the only limitation of the present embodiment.
Exception handling in the desk lamp sending flow:
Referring to fig. 24, for example, when the camera on the desk lamp side is normal, image data is photographed according to the resolution carried in the photographing request sent by the tablet and packetized according to the packetization standard to obtain a plurality of small image data packets, and the desk lamp sequentially transmits the first packet, the middle packets, and the tail packet among these small packets to the tablet through the image data channel negotiated in advance.
With continued reference to fig. 24, when the first packet sent by the desk lamp reaches the tablet side, it is buffered by the network packet collecting module, and the network packet (substantially the first packet) is reported to the network packet assembling module.
Correspondingly, after the network packet assembling module receives the network packet reported by the network packet collecting module, it extracts the identification information in the packet identification field.
As shown in fig. 24, when the packet identification extracted from the network packet by the network packet assembling module is "10", the packet is the first packet. It is then determined whether the state identification is "0"; "0" indicates that the packet is normal, in which case the data body of the packet needs to be parsed, and the image data filled in each byte after the first 128 bytes of the data body is extracted, thereby obtaining the actual image data carried in the first packet.
It can be understood that, for each image data packet sent by the desk lamp side, the network packet collecting module performs the same operation, i.e., buffering each received network packet and reporting it to the network packet assembling module. The network packet assembling module, in turn, extracts the packet identification information and judges the state of each network packet according to the above processing logic, so as to parse the data body when the network packet is a normal packet and extract the actual image data from the corresponding position. However, if the desk lamp encounters a hardware fault or a network fault while subsequently transmitting the remaining image data packets, i.e., the middle packets and the tail packet after the above first packet, the state identification bit in the identification field of every image data packet transmitted during the fault period is filled with "1", so as to inform the tablet that a fault has currently occurred and that all other image data packets related to this image data packet need to be discarded. For convenience of explanation, this embodiment takes an abnormality occurring during the transmission of a middle packet as an example.
With continued reference to fig. 24, when the packet identification extracted from the received network packet is "00", the packet is a middle packet. It is then determined whether the state identification is "0"; since the desk lamp failed during transmission, the state identification has been modified to "1", so the extracted state identification is "1", indicating that the packet is abnormal. At this point, the network packet assembling module may delete the already spliced image data, i.e., the image data extracted from the first packet, report the cause of the abnormality across threads, and notify the network packet collecting module to empty the buffer.
Furthermore, it should be noted that, in some implementations, before the tablet responds to the current abnormality, it may continue to receive network packets transmitted from the desk lamp side. For this case, the network packet collecting module continues buffering and reports the buffered network packets to the network packet assembling module, and the network packet assembling module detects whether a packet carries the first-packet identification, i.e., "10". If not, the packet is discarded directly and the network packet collecting module is notified to empty the buffer; if the first-packet identification is present, collection of network packets resumes, and the image data is parsed, extracted, and spliced according to the above processing logic. After all the image data has been spliced, the spliced image data is reported to the data processing module to be processed into an access unit, and finally the image processing module performs graphics processing on the access unit to obtain an image that can be displayed on the user interface.
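The resynchronization just described, i.e., dropping everything until a new first packet restarts collection, can be sketched as follows (illustrative only; the 128-byte data header in the first packet is ignored for brevity):

```python
# Sketch of the resync behaviour above: an abnormal state bit drops the
# spliced data, and later packets are discarded until a new first
# packet ("10") restarts collection.
def assemble_with_resync(packets, report_error, clear_buffer):
    spliced, collecting = bytearray(), False
    for pkt in packets:
        if pkt["id"] == "10":                    # (new) first packet: restart
            spliced, collecting = bytearray(), True
        if not collecting:
            continue                             # discard until a first packet
        if pkt["status"] == 1:                   # fault flagged mid-transfer
            report_error("transmission abnormality")
            clear_buffer()
            spliced, collecting = bytearray(), False
            continue
        spliced += pkt["body"]
        if pkt["id"] == "01":                    # tail packet completes the image
            return bytes(spliced)
    return None
```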
Exception handling when the packets collected by the tablet exceed the maximum limit:
Referring to fig. 25, the desk lamp, for example, continuously transmits small image data packets, such as the first packet, middle packet 1, middle packet 2, and the tail packet, to the tablet. The network packet collecting module continuously buffers the received network packets and reports them to the network packet assembling module.
Accordingly, suppose the network packet assembling module determines, according to the processing logic given in the above embodiment, that the first packet and middle packet 1 are normal packets, parses their data bodies, and splices the actual image data to obtain spliced image data 1. If, when middle packet 2 is received, spliced image data 1 is already greater than the set threshold, or splicing the image data carried in middle packet 2 onto spliced image data 1 would make it greater than the set threshold, this indicates that the size of the image data in the image data packets received by the tablet exceeds the set maximum limit. Even if the image data carried in the subsequent packets were spliced onto spliced image data 1, the resulting image data could not be displayed normally in the user interface of the tablet. Therefore, when such an abnormality occurs, the network packet assembling module does not parse and extract the image data in middle packet 2 and the later middle packets, but marks middle packet 2 and the later middle packets with an error state and discards them.
With continued reference to fig. 25, after completing the above operation, the network packet assembling module does not immediately report the abnormality across threads, nor does it notify the network packet collecting module to empty the buffer; instead, it continues to wait for the network packets reported by the network packet collecting module. When it determines from the packet identification in a received network packet that the packet is a tail packet or a first packet, the network packet assembling module deletes the already spliced image data 1, reports the cause of the abnormality across threads, specifically that the received packets exceed the maximum limit, and notifies the network packet collecting module to empty the buffer.
It can be understood that, since the trigger may be the reception of either a tail packet or a first packet, when the network packet collecting module is notified to empty the buffer, in the tail-packet case all network packets buffered in the buffer queue may be cleared, whereas in the first-packet case all network packets in the buffer queue except the new first packet may be cleared.
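A sketch of this size-limit handling follows; MAX_IMAGE_BYTES is an assumed value, and `state` carries the spliced data and the overrun mark between calls:

```python
# Sketch of the maximum-limit handling above: once the spliced data
# would pass MAX_IMAGE_BYTES, later middle packets are marked and
# dropped, and cleanup waits for a tail packet (clear everything) or a
# new first packet (clear everything except that first packet).
MAX_IMAGE_BYTES = 20 * 1024 * 1024          # assumed limit, e.g. 20 MB

def on_packet(pkt, state, report_error, buffer):
    if state["overrun"]:
        if pkt["id"] in ("01", "10"):       # tail packet or new first packet
            state["spliced"] = bytearray()  # delete spliced image data 1
            report_error("received packets exceed the maximum limit")
            buffer.clear()                  # empty the buffer queue
            if pkt["id"] == "10":
                buffer.append(pkt)          # keep only the new first packet
            state["overrun"] = False
        return                              # middle packets: mark as error, drop
    if len(state["spliced"]) + len(pkt["body"]) > MAX_IMAGE_BYTES:
        state["overrun"] = True             # stop parsing, start discarding
        return
    state["spliced"] += pkt["body"]
```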
Exception handling when the tablet's packet collection exceeds the time limit:
In order to prevent the tablet from failing to receive the needed image data for a long time after sending a photographing request and thus falling into a long wait that occupies tablet resources, the tablet side typically sets a timeout period; if no displayable image data has been obtained once the timeout period is exceeded, a timeout indication is made. For this abnormal scene, after determining that the timeout has occurred, the image processing module actively issues a destroy instruction to the data processing module, so that the data processing module automatically performs the destroy operation according to the instruction; meanwhile, the data processing module also sends the destroy instruction to the desk lamp side, so that the desk lamp destroys the image data channel between the desk lamp and the tablet and stops continuing to send image data packets to the tablet.
For this implementation of the exception handling, referring to fig. 26, for example, after the tablet side sends a photographing request carrying the resolution to the desk lamp by calling the virtual camera, the image processing module automatically starts a timer. While the timer is running, the network packet collecting module buffers the received network packets according to the above processing logic and reports them to the network packet assembling module; meanwhile, the network packet assembling module identifies and judges the packets according to the above processing logic, so as to parse and splice the image data.
With continued reference to fig. 26, the network packet collecting module and the network packet assembling module repeatedly perform the above processing. Once the time counted by the timer started by the image processing module reaches the set timeout period, the image processing module actively issues a destroy instruction to the data processing module, so that the data processing module automatically performs the destroy operation according to the instruction; meanwhile, the data processing module also sends the destroy instruction to the desk lamp side, so that the desk lamp destroys the image data channel between the desk lamp and the tablet and stops continuing to send image data packets to the tablet.
In this way, before the user re-triggers the photographing request, the image data channel on the desk lamp side can be torn down in advance, and the image processing module, the data processing module, the network packet assembling module, and the network packet collecting module can be destroyed, thereby reducing the tablet's resource occupation.
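This timer-driven teardown can be sketched with a standard-library timer; the timeout value and the callback names are assumptions:

```python
import threading

# Sketch of the timeout path: a timer is armed when the photographing
# request is sent; on expiry the processing modules are destroyed and a
# destroy instruction is sent so the desk lamp tears down the channel.
TIMEOUT_SECONDS = 10.0       # assumed timeout period

def send_request_with_timeout(send_request, destroy_modules, send_destroy_to_lamp):
    def on_timeout():
        destroy_modules()            # image/data processing, assembler, collector
        send_destroy_to_lamp()       # desk lamp destroys the image data channel
    timer = threading.Timer(TIMEOUT_SECONDS, on_timeout)
    timer.start()
    send_request()                   # photographing request via the virtual camera
    return timer                     # call timer.cancel() if the image completes
```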
It will also be appreciated that the electronic device, in order to achieve the above-described functionality, comprises corresponding hardware and/or software modules that perform the respective functions. The steps of an algorithm for each example described in connection with the embodiments disclosed herein may be embodied in hardware or a combination of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application in conjunction with the embodiments, but such implementation is not to be considered as outside the scope of this application.
In addition, it should be noted that, in an actual application scenario, the method provided in each of the foregoing embodiments implemented by the electronic device may also be performed by a chip system included in the electronic device, where the chip system may include a processor. The chip system may be coupled to a memory such that the chip system, when running, invokes a computer program stored in the memory, implementing the steps performed by the electronic device described above. The processor in the chip system can be an application processor or a non-application processor.
In addition, the embodiment of the application further provides a computer readable storage medium, where computer instructions are stored, which when executed on an electronic device, cause the electronic device to execute the related method steps to implement the method in each embodiment.
In addition, the embodiments of the present application further provide a computer program product, which when executed on an electronic device, causes the electronic device to perform the above-mentioned related steps to implement the method in the above-mentioned embodiments.
In addition, embodiments of the present application also provide a chip (which may also be a component or module) that may include one or more processing circuits and one or more transceiver pins; wherein the transceiver pin and the processing circuit communicate with each other through an internal connection path, and the processing circuit executes the related method steps to implement the method in the above embodiment, so as to control the receiving pin to receive signals, and control the transmitting pin to transmit signals.
In addition, as can be seen from the foregoing description, the electronic device, the computer-readable storage medium, the computer program product, or the chip provided in the embodiments of the present application are used to perform the corresponding methods provided above, and therefore, the advantages achieved by the method can refer to the advantages in the corresponding methods provided above, which are not repeated herein.
The above embodiments are merely intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents, and that such modifications and substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (16)

1. An abnormality processing method of an image data transmission process, which is applied to a first electronic device, the method comprising:
receiving a network small packet sent by a second electronic device, wherein the network small packet carries packet identification information and state identification information;
when the packet identification information indicates that the network small packet is an abnormal feedback packet, parsing a data body of the abnormal feedback packet, extracting an error code carried in the abnormal feedback packet, and reporting the error code to a target application;
when the packet identification information indicates that the network small packet is an image data packet, determining whether the image data packet is normal according to the state identification information; and
when it is determined according to the state identification information that the image data packet is abnormal, deleting the parsed image data, emptying the buffer, and reporting the cause of the abnormality to the target application.
2. The method of claim 1, wherein the network small packet comprises a data header, an identification field, and a data body;
the data header occupies 12 bytes, and the identification field occupies 1 byte;
the binary content of bit 0 and bit 1 in the identification field is the packet identification information, and the binary content of bit 3 in the identification field is the state identification information; and
the data body is used for storing the image data or the error code.
3. The method of claim 2, wherein, when the packet identification information indicates that the network small packet is the abnormal feedback packet, the data body occupies 128 bytes; and
the error code is filled in bytes 0-3 of the data body.
4. The method of claim 3, wherein, when the packet identification information indicates that the network small packet is an abnormal feedback packet, parsing the data body of the abnormal feedback packet and extracting the error code carried in the abnormal feedback packet comprises:
parsing the data body of the abnormal feedback packet, and extracting the error code from bytes 0-3 of the data body.
5. The method of claim 2, wherein, when the packet identification information indicates that the network small packet is an image data packet and the image data packet is a first packet, the data body comprises a first data portion and a second data portion;
the first data portion is a reserved field occupying 128 bytes and used for carrying extension information; and
the second data portion is used for filling the image data.
6. The method of claim 5, wherein the image data packet is a first packet;
the method further comprises:
when it is determined according to the state identification information that the image data packet is normal, parsing the image data packet, and extracting the image data from the second data portion.
7. The method of claim 2, wherein, when the packet identification information indicates that the network small packet is an image data packet and the image data packet is a middle packet or a tail packet, the data body is filled with the image data starting from byte 0.
8. The method of claim 7, wherein the image data packet is a middle packet or a tail packet;
the method further comprises:
when it is determined according to the state identification information that the image data packet is normal, parsing the image data packet, and extracting the image data from byte 0 of the data body.
9. The method according to claim 6 or 8, characterized in that the method further comprises:
when a new middle packet is received, judging whether the already extracted image data is greater than a set threshold;
when the extracted image data is greater than the set threshold, marking the new middle packet as exceeding the limit, and discarding the new middle packet; and
when a tail packet or a new first packet is received, deleting the extracted image data, emptying the buffer, and reporting the cause of the abnormality to the target application.
10. The method of claim 9, wherein deleting the extracted image data and emptying the buffer when a tail packet or a new first packet is received comprises:
when the tail packet is received, deleting the extracted image data, and clearing all buffered image data packets; and
when a new first packet is received, deleting the extracted image data, and clearing all buffered small image data packets except the new first packet.
11. The method according to any one of claims 1 to 8, further comprising:
starting a timer when a photographing request is sent to the second electronic device;
during the timing of the timer, receiving the network small packets sent by the second electronic device, and processing the network small packets according to the packet identification information and the state identification information therein; and
in the course of processing the network small packets, when the time counted by the timer reaches the set timeout period, destroying the threads that process the network small packets, and sending a destroy instruction to the second electronic device so as to destroy an image data channel between the second electronic device and the first electronic device.
12. An electronic device, comprising:
a memory and a processor, the memory coupled with the processor;
the memory stores program instructions that, when executed by the processor, cause the electronic device to perform the abnormality processing method of the image data transmission process of any one of claims 1 to 11.
13. A cooperative work system, comprising: a first electronic device configured to execute the abnormality processing method of the image data transmission process according to any one of claims 1 to 11, and a second electronic device on which a camera for collecting image data is provided, a target application in the first electronic device being bound to the second electronic device;
the first electronic device is configured to: register a virtual camera corresponding to the camera in its system, and send a photographing request to the second electronic device by calling the virtual camera;
the second electronic device is configured to: call the camera to photograph image data according to the photographing request of the first electronic device, and send the image data to the target application of the first electronic device for preview display.
14. The system of claim 13, wherein the second electronic device is an internet of things device.
15. The system of claim 14, wherein the internet of things device is a desk lamp, and the camera is configured to collect image data downward.
16. A computer-readable storage medium comprising a computer program, characterized in that the computer program, when run on an electronic device, causes the electronic device to execute the abnormality processing method of the image data transmission process according to any one of claims 1 to 11.