CN111080726A - Picture transmission method and equipment - Google Patents

Picture transmission method and equipment

Info

Publication number
CN111080726A
Authority
CN
China
Prior art keywords
picture
content
module
user
raster data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910494959.8A
Other languages
Chinese (zh)
Other versions
CN111080726B (en)
Inventor
郑洲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TCL China Star Optoelectronics Technology Co Ltd
Original Assignee
Shenzhen China Star Optoelectronics Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen China Star Optoelectronics Technology Co Ltd filed Critical Shenzhen China Star Optoelectronics Technology Co Ltd
Priority to CN201910494959.8A priority Critical patent/CN111080726B/en
Publication of CN111080726A publication Critical patent/CN111080726A/en
Application granted granted Critical
Publication of CN111080726B publication Critical patent/CN111080726B/en
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals

Abstract

The embodiment of the invention discloses a picture transmission method and picture transmission equipment, which are applied to the technical field of data transmission and can alleviate the problem that picture transmission is time-consuming. The method is applied to a first device and comprises the following steps: rasterizing a first picture to be transmitted to obtain first raster data; encoding the first raster data to obtain first encoded data; and transmitting the first encoded data to a second device. The method is applied to scenarios in which pictures are transmitted.

Description

Picture transmission method and equipment
Technical Field
The embodiment of the invention relates to the technical field of data transmission, in particular to a picture transmission method and picture transmission equipment.
Background
At present, when a terminal device uploads a picture to a server or downloads a picture from a server, the picture is usually compressed before transmission so that it can be transmitted quickly, and the compressed picture is then transmitted. However, to preserve the recognizability of the picture, its data size cannot be compressed very far, so the conventional picture transmission method still suffers from time-consuming transmission.
Disclosure of Invention
The embodiment of the invention provides a picture transmission method and picture transmission equipment, which are used for solving the problem that the time for transmitting pictures is long in the prior art. In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
in a first aspect, a method for transmitting pictures is provided, where the method includes:
rasterizing a first picture to be transmitted to obtain first raster data;
encoding the first raster data to obtain first encoded data;
transmitting the first encoded data to a second device.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the method further includes:
receiving second encoded data sent by the second device;
decoding the second encoded data to obtain second raster data;
and converting the second raster data into a picture to obtain a second picture.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the first device is a terminal device, and the second device is a server, or the first device is a server and the second device is a terminal device.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after the converting the second raster data into a picture to obtain a second picture, the method further includes:
detecting that a user clicks to read the second picture in a reading mode;
identifying first content in the second picture;
acquiring second content matched with the first content in a database;
and reading the second content.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after obtaining, in the database, the second content that matches the first content, the method further includes:
displaying the second content, and displaying target content associated with the second content in the form of thumbnail images.
In a second aspect, there is provided an apparatus, which is a first apparatus, the first apparatus comprising:
the processing module is used for carrying out rasterization processing on a first picture to be transmitted so as to obtain first raster data;
the encoding module is used for encoding the first raster data to obtain first encoded data;
and the transmission module is used for transmitting the first coded data to second equipment.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the first device further includes:
the receiving module is used for receiving second coded data sent by second equipment;
the decoding module is used for decoding the second coded data to obtain second raster data;
and the conversion module is used for converting the second raster data into a picture so as to obtain a second picture.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the first device is a terminal device and the second device is a server, or the first device is a server and the second device is a terminal device.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the first device further includes:
the detection module is used for converting the second raster data into a picture so as to obtain a second picture, and then detecting that a user reads the second picture in a reading mode;
the identification module is used for identifying first content in the second picture;
the acquisition module is used for acquiring second content matched with the first content in a database;
and the reading module is used for reading the second content.
As an optional implementation manner, in the second aspect of the embodiment of the present invention, the first device further includes:
and the display module is used for, after the second content matched with the first content is obtained from the database, displaying the second content and displaying the target content associated with the second content in the form of thumbnails.
In a third aspect, an apparatus is provided, where the apparatus is a first apparatus, and the first apparatus includes:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the picture transmission method in the first aspect of the embodiment of the present invention.
In a fourth aspect, a computer-readable storage medium is provided, which stores a computer program that causes a computer to execute the picture transmission method in the first aspect of the embodiment of the present invention. The computer readable storage medium includes a ROM/RAM, a magnetic or optical disk, or the like.
In a fifth aspect, there is provided a computer program product for causing a computer to perform some or all of the steps of any one of the methods of the first aspect when the computer program product is run on the computer.
A sixth aspect provides an application publishing platform for publishing a computer program product, wherein the computer program product, when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in this embodiment of the present invention, the first device may perform rasterization processing on a first picture to be transmitted to obtain first raster data, encode the first raster data to obtain first encoded data, and transmit the first encoded data to the second device. By the scheme, the first device can perform rasterization processing on the picture, encode the processed raster data and transmit the encoded raster data to the second device, and the data volume of the picture after rasterization is greatly reduced, so that the picture transmission efficiency can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a first flowchart illustrating a picture transmission method according to an embodiment of the present invention;
fig. 2 is a second flowchart illustrating a picture transmission method according to an embodiment of the present invention;
fig. 3 is a third schematic flowchart of a picture transmission method according to an embodiment of the present invention;
fig. 4 is a fourth schematic flowchart of a picture transmission method according to an embodiment of the present invention;
fig. 5 is a first schematic structural diagram of a first device according to an embodiment of the present invention;
fig. 6 is a second schematic structural diagram of a first device according to an embodiment of the present invention;
fig. 7 is a third schematic structural diagram of a first device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first" and "second," and the like, in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first device and the second device, etc. are for distinguishing different devices, and are not for describing a particular order of the devices.
The terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations or explanations. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention is not to be construed as preferred or advantageous over other embodiments or designs. Rather, use of these words is intended to present related concepts in a concrete fashion.
The embodiment of the invention provides a picture transmission method and picture transmission equipment, which can improve the picture transmission efficiency.
The equipment related to the embodiment of the invention can be terminal equipment such as mobile phones, tablet computers, point-read machines, home education machines, notebook computers, palmtop computers, vehicle-mounted terminal equipment, wearable equipment, ultra-mobile personal computers (UMPCs), netbooks or personal digital assistants (PDAs); the device related to the embodiment of the invention can also be a server.
The execution main body of the image transmission method provided by the embodiment of the present invention may be a terminal device or a server, or may also be a functional module and/or a functional entity capable of implementing the image transmission method in the terminal device or the server, which may be specifically determined according to actual use requirements, and the embodiment of the present invention is not limited. The execution subject of the picture transmission method is referred to as a first device below, and the picture transmission method provided by the embodiment of the invention is exemplarily described.
Example one
As shown in fig. 1, an embodiment of the present invention provides a picture transmission method, where the method is applied to a first device, and the first device may perform the following steps:
101. the first device performs rasterization processing on a first picture to be transmitted to obtain first raster data.
The first device is a terminal device and the second device is a server, or the first device is a server and the second device is a terminal device.
102. The first device encodes the first raster data to obtain first encoded data.
103. The first device transmits the first encoded data to the second device.
For example, assuming that the first device is a terminal device and the second device is a server, the above 101 to 103 may be a process of uploading a first picture to the server for the terminal device. Assuming that the first device is a server and the second device is a terminal device, the above 101 to 103 may be a process of downloading the first picture from the server for the terminal device.
Accordingly, after the first device transmits the first encoded data to the second device, the second device may receive the first encoded data, decode the first encoded data to obtain first raster data, and convert the first raster data into a picture to obtain a first picture.
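The sending pipeline in steps 101 to 103 can be sketched as follows. The patent does not disclose a concrete rasterization or coding algorithm, so this is only an illustrative Python sketch under assumed choices: rasterization is stood in for by averaging the picture over a coarse cell grid, and encoding by run-length coding; the functions `rasterize` and `encode` and the cell size are invented for illustration, not the claimed method.

```python
# Hypothetical sketch of steps 101-103: rasterize, then encode, then transmit.
# The concrete algorithms are assumptions; the pipeline shape is what matters.

def rasterize(pixels, cell=2):
    """Downsample a 2-D grid of grayscale pixels into coarse raster cells."""
    h, w = len(pixels), len(pixels[0])
    raster = []
    for r in range(0, h, cell):
        row = []
        for c in range(0, w, cell):
            block = [pixels[i][j]
                     for i in range(r, min(r + cell, h))
                     for j in range(c, min(c + cell, w))]
            row.append(sum(block) // len(block))  # average value per cell
        raster.append(row)
    return raster

def encode(raster):
    """Run-length encode the flattened raster data into (value, count) pairs."""
    flat = [v for row in raster for v in row]
    out, prev, count = [], flat[0], 1
    for v in flat[1:]:
        if v == prev:
            count += 1
        else:
            out.append((prev, count))
            prev, count = v, 1
    out.append((prev, count))
    return out

picture = [[10, 10, 200, 200],
           [10, 10, 200, 200],
           [10, 10, 10, 10],
           [10, 10, 10, 10]]
raster = rasterize(picture)   # first raster data
encoded = encode(raster)      # first encoded data, ready to transmit
```

The run-length pairs become compact when the raster contains large uniform regions, which is the source of the reduced data volume the embodiment relies on.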
In the picture transmission method provided by the embodiment of the present invention, the first device may perform rasterization on a first picture to be transmitted to obtain first raster data, encode the first raster data to obtain first encoded data, and transmit the first encoded data to the second device. By the scheme, the first device can perform rasterization processing on the picture, encode the processed raster data and transmit the encoded raster data to the second device, and the data volume of the picture after rasterization is greatly reduced, so that the picture transmission efficiency can be improved.
Example two
As shown in fig. 2, an embodiment of the present invention provides a picture transmission method, which is applied to a first device, where the first device may perform the following steps:
201. the first device performs rasterization processing on a first picture to be transmitted to obtain first raster data.
202. The first device encodes the first raster data to obtain first encoded data.
203. The first device transmits the first encoded data to the second device.
For the descriptions 201 to 203, reference may be made to the descriptions of 101 to 103 in the first embodiment, which are not described herein again.
After 203, the method for transmitting a picture according to the embodiment of the present invention may further include 204 to 206.
204. The first device receives the second coded data sent by the second device.
205. The first device decodes the second encoded data to obtain second raster data.
206. And the first equipment converts the second raster data into a picture to obtain a second picture.
For example, assuming that the first device is a terminal device and the second device is a server, the 201 to 203 may be a process in which the terminal device uploads a first picture to the server, and the 204 to 206 may be a process in which the terminal device downloads a second picture from the server. Assuming that the first device is a server and the second device is a terminal device, 201 to 203 may be a process in which the terminal device downloads a first picture from the server, and 204 to 206 may be a process in which the terminal device uploads a second picture to the server.
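The receive path in steps 204 to 206 is the inverse pipeline. As before, the concrete coding scheme is not disclosed; assuming the same illustrative run-length form, a hypothetical Python sketch of decoding and converting back to a picture:

```python
# Hypothetical inverse of the sending pipeline: decode run-length pairs,
# then reshape the flat raster data into rows of pixels (the second picture).

def decode(encoded):
    """Expand (value, count) run-length pairs back into flat raster data."""
    flat = []
    for value, count in encoded:
        flat.extend([value] * count)
    return flat

def to_picture(flat, width):
    """Reshape flat raster data into rows of the given width."""
    return [flat[i:i + width] for i in range(0, len(flat), width)]

second_encoded = [(10, 1), (200, 1), (10, 2)]  # as received from the second device
second_raster = decode(second_encoded)
second_picture = to_picture(second_raster, width=2)
```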
In the embodiment of the present invention, the first device may send the rasterized and encoded picture to the second device, and the first device may also receive the encoded data sent by the second device, decode the data to obtain raster data, and convert the raster data into the picture, so that the first device may not only improve the rate of sending the picture, but also improve the rate of receiving the picture.
EXAMPLE III
As shown in fig. 3, an embodiment of the present invention provides a picture transmission method, where the method is applied to a first device, and in this embodiment, the first device may be a terminal device with a point-to-read function, and the first device may perform the following steps:
301. the first device performs rasterization processing on a first picture to be transmitted to obtain first raster data.
302. The first device encodes the first raster data to obtain first encoded data.
303. The first device transmits the first encoded data to the second device.
304. The first device receives the second coded data sent by the second device.
305. The first device decodes the second encoded data to obtain second raster data.
306. And the first equipment converts the second raster data into a picture to obtain a second picture.
For the descriptions 301 to 306, reference may be made to the descriptions related to 201 to 206 in the second embodiment, and the descriptions are omitted here.
The first device may display the second picture after obtaining the second picture and perform 307 described below.
307. And the first equipment detects that the user reads the second picture in a reading mode.
Optionally, the first device may detect a click-to-read region of the user on the display screen, determine whether the user clicks the second picture according to a position of the click-to-read region, and identify content in the second picture when it is detected that the user clicks the second picture.
308. The first device identifies first content in the second picture.
Optionally, the terminal device may identify the first content in the second picture by using an image recognition technology.
Optionally, the first content in the embodiment of the present invention may be part of or all of the content in the second picture.
309. The first device obtains second content matching the first content in the database.
The second content matching the first content may be content in the database that is identical to the first content, or content whose degree of correlation with the first content is greater than a correlation threshold.
310. The first device reads the second content.
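The matching rule in step 309 (identical content, or correlation above a threshold) can be sketched as follows. The patent does not define the correlation measure, so the token-overlap similarity and the threshold value below are assumptions for illustration:

```python
# Hypothetical sketch of step 309: find second content in a database that
# either equals the first content or correlates with it above a threshold.
# Token-overlap (Jaccard) similarity stands in for the unspecified measure.

def correlation(a, b):
    """Token-overlap similarity in [0, 1] as a stand-in correlation measure."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb)

def find_second_content(first_content, database, threshold=0.5):
    """Return the first database entry matching the first content, else None."""
    for entry in database:
        if entry == first_content or correlation(entry, first_content) > threshold:
            return entry
    return None

db = ["the quick brown fox", "lorem ipsum dolor"]
match = find_second_content("quick brown fox jumps", db)
```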
Optionally, the following 311 may be performed after the above 309.
311. The first device displays the second content, and displays target content associated with the second content in the form of thumbnails.
In this embodiment, the first device may identify the first content in the actual click-to-read region, and may not only read the second content aloud but also display the second content, so that the user experience when the user point-reads content can be improved.
Alternatively, the second content may be the content of a target page of a certain book, and the target content associated with the second content may be the content of pages adjacent to that target page (for example, the content of the page preceding the target page and the content of the page following it); or the second content may be content in a certain version of a book, and the target content may be the corresponding content in other versions of that book.
In the embodiment of the invention, the user can intuitively see the second content by displaying the second content and displaying the target content associated with the second content in the form of the thumbnail.
Furthermore, when the user wants to view the content related to the second content, the first device can be triggered to enlarge and display the related content through touch input of the user on the thumbnail, so that the user can conveniently view the target content related to the second content.
As an optional implementation manner, the above 307 may specifically be that, in the click-to-read mode, the first device detects that the user's finger clicks to read the second picture.
After the first device detects that the user's finger clicks to read the second picture, it may collect the fingerprint information of the user's finger; in the case that the collected fingerprint information matches preset fingerprint information, it obtains a first preset voiceprint feature bound to the preset fingerprint information, and after performing steps 308 and 309, the first device can read the second content aloud using the first preset voiceprint feature.
The preset fingerprint information and the first preset voiceprint feature can be bound for the user in advance and stored in the first device.
In the above optional implementation manner, when the user reads the second picture by pointing with the finger, the first device may acquire the fingerprint information of the finger of the user, acquire the preset voiceprint feature bound with the preset fingerprint information under the condition that the acquired fingerprint information of the finger of the user matches with the preset fingerprint information, and report the second content by using the preset voiceprint feature, so that the first device may report the content read by pointing with the prestored voiceprint feature, thereby making the content more personalized and improving the user experience.
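The binding between preset fingerprint information and a preset voiceprint feature can be pictured as a stored mapping. A hypothetical Python sketch, with fingerprint matching reduced to key equality (real fingerprint matching is far more involved and is not specified here), and with all names invented for illustration:

```python
# Hypothetical sketch: fingerprint-to-voiceprint bindings stored in advance;
# a matching fingerprint selects the voiceprint feature used to read aloud.

bindings = {"fp_user_a": "voiceprint_a", "fp_user_b": "voiceprint_b"}

def voiceprint_for(fingerprint, default="voiceprint_default"):
    """Return the voiceprint feature bound to a fingerprint, or a default."""
    return bindings.get(fingerprint, default)

selected = voiceprint_for("fp_user_a")  # voiceprint used for reading aloud
```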
Further, if the preset fingerprint information is bound to a plurality of preset voiceprint features, the first device can collect the fingerprint information of the user's finger after detecting that the finger clicks to read the second picture. In the case that the collected fingerprint information matches the preset fingerprint information, it obtains the plurality of preset voiceprint features bound to the preset fingerprint information and displays a plurality of selection marks, each of which indicates one preset voiceprint feature. After the plurality of selection marks are displayed, the first device can receive a touch input of the user on a first selection mark, and in response to the touch input, read the second content aloud using the first preset voiceprint feature.
The first selection mark is a mark used for indicating a first preset voiceprint feature in the plurality of selection marks.
In the optional implementation manner, if the preset fingerprint information is bound with a plurality of preset voiceprint features, the first device can display a plurality of selection identifiers used for indicating the plurality of preset voiceprint features bound with the preset fingerprint information, and can respond to the touch operation of the user on the selection identifiers, the preset voiceprint features indicated by the selection identifiers are adopted for reading the second content, so that the user can select the voiceprint features during reading, and the sound of reading during reading the content is humanized.
As an optional implementation manner, after 311, the method provided in the embodiment of the present invention may further include: the first device detects whether the user's line of sight falls on its display screen; if not, the first device controls the display screen to enter a standby state (namely, the screen is off).
Optionally, the first device detects whether the user's line of sight falls on its display screen as follows:
the first device collects facial image information of the user, determines the direction of the user's line of sight from the facial information, and judges from that direction whether the line of sight falls on the display screen of the first device; if so, it determines that the user's line of sight falls on the display screen; otherwise, it determines that the user's line of sight does not fall on the display screen.
Through the optional implementation manner, the first device can detect whether the sight of the user falls on the display screen of the first device, and can control the display screen to be in a standby state under the condition that the sight of the user does not fall on the display screen of the first device, so that the power consumption of the display screen can be reduced.
As an alternative implementation manner, after controlling the display screen to enter the standby state, the first device may further detect a line of sight of the user, and after detecting that the line of sight of the user falls on the display screen, the first device switches the display screen from the standby state to the operating state (i.e., the display screen is in a lighted state, so that the user can see contents displayed in the display screen).
Through the optional implementation mode, the first device can switch the display screen from the standby state to the working state after controlling the display screen to enter the standby state and detecting that the sight of the user falls on the display screen again, so that the display screen can be lightened timely, and the user can conveniently check the content in the display screen.
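The gaze-driven screen policy described in the two implementations above reduces to a small rule: standby when the line of sight leaves the display, back to the working state when it returns. A minimal sketch (gaze detection itself, via facial images, is outside the sketch and assumed to yield a boolean):

```python
# Minimal sketch of the gaze-driven display policy. The gaze reading is
# assumed to come from a separate facial-image analysis step.

def next_screen_state(gaze_on_screen):
    """Map a gaze reading to the display state: lit when watched, standby otherwise."""
    return "on" if gaze_on_screen else "standby"

away = next_screen_state(False)  # line of sight leaves the display
back = next_screen_state(True)   # line of sight falls on the display again
```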
As an optional implementation manner, the above step 310 may specifically be implemented by the following steps:
310a, the first device detects the ambient noisiness of the area where it is located.
The first device may receive an audio signal via its receiver and detect the ambient noisiness of its area based on the audio signal.
310b, when the ambient noisiness is greater than a preset noisiness level, the first device outputs a prompt message instructing the user to wear earphones.
310c, when detecting that it is successfully connected to the earphones, the first device reads the second content aloud at a preset volume.
Through this optional implementation manner, by detecting the noisiness of the environment where the first device is located, the user can be prompted to wear earphones, thereby ensuring the reading effect.
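The noise check before reading aloud can be sketched as a threshold decision. The preset noisiness level and the message strings below are invented for illustration; the patent does not give concrete values:

```python
# Hedged sketch: measure ambient noise, prompt for earphones above a
# threshold, and only read aloud once earphones are connected.

NOISE_THRESHOLD_DB = 60  # assumed preset noisiness level, in dB

def reading_action(noise_db, earphones_connected):
    """Decide what the device does before reading the second content aloud."""
    if noise_db > NOISE_THRESHOLD_DB and not earphones_connected:
        return "prompt: please wear earphones"
    return "read second content at preset volume"

before = reading_action(75, earphones_connected=False)  # noisy, no earphones
after = reading_action(75, earphones_connected=True)    # earphones connected
```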
As an alternative implementation, the way for the first device to detect the ambient noise level of the area where the first device is located may be:
the first equipment detects whether a certain wireless access point is accessed currently, and if the certain wireless access point is accessed currently, the first equipment identifies whether the identification information of the wireless access point accessed currently is matched with the identification information of the wireless access point on a certain school bus, which is recorded in advance by the first equipment.
If the information is matched with the information, the first device can consider that the first device is currently located on the school bus, correspondingly, the first device can obtain the identity information of the service device transferred on the school bus, and sends a request message to the service device of the school bus through the wireless access point according to the identity information of the service device on the school bus, wherein the request message carries the identity information of the first device and a request field, and the request field is used for requesting the service device of the school bus to detect the environment noise degree in the school bus.
And the first device acquires the environment noise degree in the school bus, which is sent by the service device of the school bus in response to the request message, and takes the environment noise degree in the school bus as the environment noise degree of the area where the first device is located.
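The exchange with the school bus's service device can be pictured as a small request/response protocol. The message field names (`identity`, `request`, `noise_db`) are assumptions for illustration, not disclosed by the patent:

```python
# Hypothetical request/response exchange: the first device sends its identity
# plus a request field; the service device answers with the in-bus noisiness.

def build_request(device_id):
    """Message the first device sends through the wireless access point."""
    return {"identity": device_id, "request": "ambient_noise_level"}

def service_device_handle(message, measured_noise_db):
    """School-bus service device: answer a noise-level request."""
    if message.get("request") == "ambient_noise_level":
        return {"identity": message["identity"], "noise_db": measured_noise_db}
    return {"identity": message["identity"], "error": "unsupported request"}

reply = service_device_handle(build_request("device-001"), measured_noise_db=58)
```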
Implementing the above embodiment avoids the power consumed when the first device detects the ambient noisiness of its area by activating its own sensor, and reduces the heat generated by that extra power consumption.
Further, after receiving the request message sent by the first device to the service device of the school bus via the wireless access point, the service device of the school bus may further perform the following operations:
the service device of the school bus can identify the user attributes of the first device according to the identity information of the first device carried in the request message, where the user attributes can include the user name of the user (such as a student) to whom the first device belongs and the curriculum schedule corresponding to the user's grade, and the curriculum schedule can include the class time (including date and time) and class location of each subject;
the service device of the school bus determines the class location of a target subject from the curriculum schedule, where the class time of the target subject is the one closest to the current system time of the service device;
and when detecting that the school bus has traveled to the drop-off station corresponding to the class location of the target subject, the service device of the school bus sends an arrival notification message to the first device through the wireless access point, where the notification message includes the class time and class location of the target subject.
By implementing the embodiment, the user can be prevented from missing the place of class in the process of taking the school bus and listening to the contents to be reported.
Example four
As shown in fig. 4, an embodiment of the present invention provides an apparatus, where the apparatus is a first apparatus, and the first apparatus includes:
the processing module 401 is configured to perform rasterization processing on a first picture to be transmitted to obtain first raster data;
an encoding module 402, configured to encode the first raster data to obtain first encoded data;
a transmission module 403, configured to transmit the first encoded data to the second device.
In this embodiment of the present invention, the first device can perform rasterization processing on a first picture to be transmitted to obtain first raster data, encode the first raster data to obtain first encoded data, and transmit the first encoded data to the second device. With this scheme, the first device rasterizes the picture, encodes the resulting raster data, and transmits the encoded data to the second device; since rasterization greatly reduces the data volume of the picture, the picture transmission efficiency can be improved.
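The disclosure does not fix a particular rasterization or coding scheme; as a minimal illustrative sketch (the cell size, the mean-downsampling, and the run-length code are assumptions for illustration, not part of the patent), the first device's three steps might look like:

```python
# Hypothetical sketch of the first device's pipeline: rasterize -> encode.
# The 2x2 cell size and the run-length encoding are illustrative assumptions.

def rasterize(picture, cell=2):
    """Downsample a 2-D grid of pixel values into raster cells (mean per cell)."""
    h, w = len(picture), len(picture[0])
    raster = []
    for y in range(0, h, cell):
        row = []
        for x in range(0, w, cell):
            block = [picture[y + dy][x + dx]
                     for dy in range(min(cell, h - y))
                     for dx in range(min(cell, w - x))]
            row.append(sum(block) // len(block))
        raster.append(row)
    return raster

def encode(raster):
    """Run-length encode the flattened raster data into (value, run) pairs."""
    flat = [v for row in raster for v in row]
    encoded, run = [], 1
    for prev, cur in zip(flat, flat[1:]):
        if cur == prev:
            run += 1
        else:
            encoded.append((prev, run))
            run = 1
    encoded.append((flat[-1], run))
    return encoded

picture = [[10, 10, 40, 40],
           [10, 10, 40, 40],
           [10, 10, 10, 10],
           [10, 10, 10, 10]]
first_raster = rasterize(picture)     # [[10, 40], [10, 10]]
first_encoded = encode(first_raster)  # [(10, 1), (40, 1), (10, 2)]
```

The `first_encoded` pairs, much smaller than the original pixel grid, would then be handed to the transmission step.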
Optionally, with reference to fig. 4, as shown in fig. 5, the first device further includes:
the transmission module 403 is further configured to receive second encoded data sent by the second device;
a decoding module 404, configured to decode the second encoded data to obtain second raster data;
a converting module 405, configured to convert the second raster data into a picture to obtain a second picture.
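A matching sketch of the receiving side (decode, then convert the raster data back into a picture) follows; the run-length scheme and the raster width are the same illustrative assumptions as above, not taken from the disclosure:

```python
# Hypothetical sketch of the receiving side: decode -> convert to picture.

def decode(encoded):
    """Invert the run-length encoding back into a flat list of raster values."""
    flat = []
    for value, run in encoded:
        flat.extend([value] * run)
    return flat

def to_picture(flat, width):
    """Reshape flat raster data into a 2-D picture grid of the given width."""
    return [flat[i:i + width] for i in range(0, len(flat), width)]

second_encoded = [(10, 1), (40, 1), (10, 2)]   # as received from the second device
second_raster = decode(second_encoded)          # [10, 40, 10, 10]
second_picture = to_picture(second_raster, width=2)  # [[10, 40], [10, 10]]
```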
Optionally, the first device is a terminal device and the second device is a server, or the first device is a server and the second device is a terminal device.
Optionally, with reference to fig. 5, as shown in fig. 6, the first device further includes:
the detection module 406 is configured to detect, after the converting module 405 converts the second raster data into a picture to obtain a second picture, that the user is reading the second picture in a point-reading manner;
an identifying module 407, configured to identify the first content in the second picture;
an obtaining module 408, configured to obtain, in the database, second content that matches the first content;
and a reading module 409 for reading the second content.
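The identify-match-read flow of modules 407 to 409 can be sketched as follows; the database contents, the `point_read` helper, and the speech stand-in are hypothetical, since the patent does not specify how matching or text-to-speech is performed:

```python
# Hypothetical sketch of the point-reading flow: identify the pointed-at text
# (first content), look up matching material (second content) in a database,
# then hand it to a reading routine. The database and the TTS stand-in are
# illustrative assumptions only.

database = {
    "photosynthesis": "Photosynthesis converts light energy into chemical energy.",
}

def read_aloud(text):
    # Stand-in for the reading module; a real device would use TTS here.
    return f"[speaking] {text}"

def point_read(first_content):
    second_content = database.get(first_content)  # obtain matching content
    if second_content is None:
        return None
    return read_aloud(second_content)
```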
Optionally, in fig. 6, the first device may further include:
and a display module 410, configured to display the second content after the obtaining module 408 obtains, in the database, the second content matching the first content, and to display target content associated with the second content in the form of a thumbnail.
As an optional implementation manner, the detection module 406 is specifically configured to detect, after the converting module 405 converts the second raster data into a picture to obtain a second picture, that the user points at the second picture with a finger to read it;
the first device may further comprise the following not illustrated modules:
the fingerprint acquisition module is configured to acquire fingerprint information of the user's finger after the detection module 406 detects that the user is reading the second picture with a finger;
the voiceprint acquisition module is configured to acquire a first preset voiceprint feature bound to preset fingerprint information when the acquired fingerprint information of the user's finger matches the preset fingerprint information;
and the reading module 409 is specifically configured to read the second content using the first preset voiceprint feature.
The preset fingerprint information and the first preset voiceprint feature can be bound in advance by the user and stored in the first device.
In the above optional implementation manner, when the user points at the second picture with a finger to read it, the first device can acquire the fingerprint information of the user's finger; when that fingerprint information matches preset fingerprint information, the first device acquires the preset voiceprint feature bound to the preset fingerprint information and reads the second content aloud using that voiceprint feature. The first device can thus read pointed-at content with a prestored voiceprint feature, making the reading more personalized and improving the user experience.
Further, if the preset fingerprint information is bound to a plurality of preset voiceprint features, the voiceprint acquisition module is configured to acquire the plurality of preset voiceprint features bound to the preset fingerprint information when the acquired fingerprint information of the user's finger matches the preset fingerprint information;
the display module 410 is further configured to display a plurality of selection identifiers, where each selection identifier is used to indicate a preset voiceprint feature.
The reading module 409 may further include the following sub-modules, not shown:
the receiving submodule is used for receiving touch input of a user to the first selection identification;
and the reading sub-module is used for responding to the touch input and reading the second content by the first equipment by adopting a first preset voiceprint characteristic.
The first selection mark is a mark used for indicating a first preset voiceprint feature in the plurality of selection marks.
In this optional implementation manner, if the preset fingerprint information is bound to a plurality of preset voiceprint features, the first device may display a plurality of selection identifiers indicating those voiceprint features, and, in response to the user's touch operation on a selection identifier, read the second content using the preset voiceprint feature indicated by that identifier, so that the user can choose the voiceprint feature used to read the pointed-at content aloud.
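The fingerprint-to-voiceprint binding and selection logic can be sketched as follows; all names are hypothetical, and the index argument is a stand-in for the user's touch input on a selection identifier:

```python
# Hypothetical sketch: preset fingerprint information bound to one or more
# preset voiceprint features; when several are bound, selection identifiers
# are displayed and the user's choice picks the voiceprint used for reading.

bindings = {
    "fp-001": ["voiceprint-mother", "voiceprint-teacher"],  # illustrative data
}

def voiceprints_for(fingerprint):
    """Return the preset voiceprint features bound to a matched fingerprint."""
    return bindings.get(fingerprint, [])

def select_voiceprint(fingerprint, selected_index=0):
    options = voiceprints_for(fingerprint)
    if not options:
        return None
    # With several bindings, one selection identifier per voiceprint would be
    # displayed; selected_index models the touch input on the first selection
    # identifier (index 0) or another one.
    return options[selected_index]
```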
In an alternative implementation manner, the first device further includes the following modules, not shown in the figure:
a line of sight detection module for detecting whether the line of sight of the user falls on the display screen of the first device after the display module 410 displays the second content;
and the display screen control module is used for controlling the display screen to enter a standby state (namely, the display screen is in a screen-off state) if the sight of the user does not fall on the display screen of the first device.
Optionally, the sight line detection module may specifically include the following sub-modules:
the acquisition submodule is used for acquiring the facial image information of the user and determining the direction of the sight of the user according to the facial information of the user;
the judging submodule is used for judging whether the sight of the user falls on a display screen of the first equipment or not according to the direction of the sight of the user, and if so, determining that the sight of the user falls on the display screen of the first equipment; otherwise, judging that the sight line of the user does not fall on the display screen of the first device according to the direction of the sight line of the user.
Through the optional implementation manner, the first device can detect whether the implementation of the user falls on the display screen of the first device, and can control the display screen to be in a standby state under the condition that the sight of the user does not fall on the display screen of the first device, so that the power consumption of the display screen can be reduced.
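One hedged way to model the line-of-sight check is below; the angle representation and the 30-degree threshold are assumptions, since the disclosure only states that the gaze direction is derived from facial image information:

```python
# Hypothetical sketch of the line-of-sight check: from an estimated gaze
# direction (modelled as an angle from the screen normal, in degrees), decide
# whether the gaze falls on the display and set the screen state accordingly.
# The 30-degree threshold is an illustrative assumption.

ON_SCREEN_MAX_ANGLE = 30.0

def gaze_on_screen(gaze_angle_deg):
    """True if the estimated gaze direction points at the display screen."""
    return abs(gaze_angle_deg) <= ON_SCREEN_MAX_ANGLE

def screen_state(gaze_angle_deg):
    """Working (lit) when the gaze is on screen, otherwise standby (screen off)."""
    return "working" if gaze_on_screen(gaze_angle_deg) else "standby"
```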
As an optional implementation manner, after the display screen control module controls the display screen to enter the standby state, the sight line detection module may further detect a sight line of the user; and the display screen control module is further used for switching the display screen from a standby state to an operating state (namely, the display screen is in a lighting state so that the user can see the content displayed in the display screen) after the sight line detection module detects that the sight line of the user falls on the display screen.
Through this optional implementation manner, after controlling the display screen to enter the standby state, the first device can switch the display screen from the standby state to the working state once it detects that the user's line of sight falls on the display screen again, so that the display screen is lit in a timely manner and the user can conveniently view the content on the display screen.
The detection submodule is configured to detect the ambient noise level of the area where the first device is located;
the output submodule is configured to output prompt information instructing the user to wear earphones when the ambient noise level is greater than a preset noise level;
and the reading submodule is configured to perform the reading operation on the second content at a preset volume when detecting that the first device is successfully connected to the earphones.
Through this optional implementation manner, the user can be prompted to wear earphones based on the detected ambient noise level of the environment where the first device is located, so that the reading effect is ensured.
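The noise-gated reading behaviour above can be sketched as follows; the decibel threshold and the preset volume are illustrative values, not taken from the disclosure:

```python
# Hypothetical sketch of the noise-gated reading flow: if ambient noise
# exceeds a preset level, prompt the user to wear earphones and only read
# aloud once the earphones are connected. Threshold and volume values are
# illustrative assumptions.

PRESET_NOISE_DB = 60.0
PRESET_VOLUME = 70

def reading_action(ambient_noise_db, earphones_connected):
    if ambient_noise_db > PRESET_NOISE_DB:
        if not earphones_connected:
            return "prompt: please wear earphones"
        return f"read at volume {PRESET_VOLUME} via earphones"
    return f"read at volume {PRESET_VOLUME}"
```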
As an optional implementation manner, the reading sub-module may specifically include the following units not shown in the figure:
The wireless detection unit is configured to detect whether the first device is currently connected to a wireless access point;
the identification unit is configured to identify whether the identification information of the currently connected wireless access point matches the identification information, recorded in advance by the first device, of a wireless access point on a school bus, and if so, the first device can be considered to be currently located on the school bus;
the identity acquisition unit is configured to acquire the identity information of the service device deployed on the school bus;
the information sending unit is configured to send a request message to the service device of the school bus through the wireless access point according to the identity information of the service device on the school bus, wherein the request message carries the identity information of the first device and a request field, and the request field is used for requesting the service device of the school bus to detect the ambient noise level in the school bus;
and the information receiving unit is configured to acquire the ambient noise level in the school bus, sent by the service device of the school bus in response to the request message, and to take the ambient noise level in the school bus as the ambient noise level of the area where the first device is located.
Further, after receiving the request message sent by the first device to the service device of the school bus via the wireless access point, the service device of the school bus may further perform the following operations:
the service device of the school bus can identify the user attributes of the first device according to the identity information of the first device carried in the request message, wherein the user attributes may include the user name of the user (such as a student) to whom the first device belongs and a curriculum schedule corresponding to the user's grade, and the curriculum schedule may include the class time (including date and time) and the class location of each subject;
the service device of the school bus determines the class location of the target subject from the curriculum schedule, wherein the class time of the target subject is the one closest to the current system time of the service device of the school bus;
and when detecting that the school bus has traveled to the drop-off station corresponding to the class location of the target subject, the service device of the school bus sends an arrival notification message to the first device through the wireless access point, wherein the notification message includes the class time and the class location of the target subject.
By implementing this embodiment, a user riding the school bus while listening to the read-aloud content can be prevented from missing the class location.
As shown in fig. 7, an embodiment of the present invention further provides a device, where the device is a first device, and the first device may include:
a memory 501 in which executable program code is stored;
a processor 502 coupled to a memory 501;
the processor 502 calls the executable program code stored in the memory 501 to perform the picture transmission method performed by the first device in the above method embodiments.
It should be noted that the first device shown in fig. 7 may further include components that are not shown, such as a battery, input keys, a speaker, a microphone, a screen, an RF circuit, a Wi-Fi module, a Bluetooth module, and sensors, which are not described in detail in this embodiment.
Embodiments of the present invention provide a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute some or all of the steps of the method as in the above method embodiments.
Embodiments of the present invention also provide a computer program product, wherein the computer program product, when run on a computer, causes the computer to perform some or all of the steps of the method as in the above method embodiments.
Embodiments of the present invention further provide an application publishing platform, where the application publishing platform is configured to publish a computer program product, where the computer program product, when running on a computer, causes the computer to perform some or all of the steps of the method in the above method embodiments.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are exemplary and alternative embodiments, and that the acts and modules illustrated are not required in order to practice the invention.
The first device provided in the embodiment of the present invention can implement each process shown in the above method embodiments, and is not described here again to avoid repetition.
In various embodiments of the present invention, it should be understood that the sequence numbers of the above-mentioned processes do not imply an inevitable order of execution, and the execution order of the processes should be determined by their functions and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as a stand-alone product, may be stored in a computer-accessible memory. Based on such understanding, the technical solution of the present invention in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute all or part of the steps of the method of each embodiment of the present invention.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by instructions associated with hardware, and the instructions may be stored in a computer-readable storage medium. The storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, magnetic disk memory, magnetic tape memory, or any other medium which can be used to carry or store data and which can be read by a computer.

Claims (10)

1. A picture transmission method is applied to a first device, and comprises the following steps:
rasterizing a first picture to be transmitted to obtain first raster data;
encoding the first raster data to obtain first encoded data;
transmitting the first encoded data to a second device.
2. The method of claim 1, further comprising:
receiving second encoded data sent by the second device;
decoding the second encoded data to obtain second raster data;
and converting the second raster data into a picture to obtain a second picture.
3. The method according to claim 1 or 2, wherein the first device is a terminal device and the second device is a server, or the first device is a server and the second device is a terminal device.
4. The method of claim 1 or 2, wherein after converting the second raster data into a picture to obtain a second picture, the method further comprises:
detecting that a user reads the second picture in a reading mode;
identifying first content in the second picture;
acquiring second content matched with the first content in a database;
and reading the second content.
5. The method of claim 4, wherein after retrieving the second content matching the first content in the database, the method further comprises:
displaying the second content, and displaying target content associated with the second content in the form of thumbnail images.
6. An apparatus, characterized in that the apparatus is a first apparatus comprising:
the processing module is used for carrying out rasterization processing on a first picture to be transmitted so as to obtain first raster data;
the encoding module is used for encoding the first raster data to obtain first encoded data;
and the transmission module is used for transmitting the first coded data to second equipment.
7. The apparatus of claim 6, wherein the first apparatus further comprises:
the receiving module is used for receiving second encoded data sent by the second device;
the decoding module is used for decoding the second encoded data to obtain second raster data;
and the conversion module is used for converting the second raster data into a picture so as to obtain a second picture.
8. The device according to claim 6 or 7, wherein the first device is a terminal device and the second device is a server, or the first device is a server and the second device is a terminal device.
9. The apparatus of claim 6 or 7, wherein the first apparatus further comprises:
the detection module is used for detecting that a user reads the second picture in a reading mode after the conversion module converts the second raster data into the picture to obtain the second picture;
the identification module is used for identifying first content in the second picture;
the acquisition module is used for acquiring second content matched with the first content in a database;
and the reading module is used for reading the second content.
10. The apparatus of claim 9, wherein the first apparatus further comprises:
and the display module is used for displaying the second content after the acquisition module acquires, in the database, the second content matched with the first content, and for displaying target content associated with the second content in a thumbnail mode.
CN201910494959.8A 2019-06-06 2019-06-06 Picture transmission method and device Active CN111080726B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910494959.8A CN111080726B (en) 2019-06-06 2019-06-06 Picture transmission method and device

Publications (2)

Publication Number Publication Date
CN111080726A true CN111080726A (en) 2020-04-28
CN111080726B CN111080726B (en) 2023-05-23

Family

ID=70310066

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910494959.8A Active CN111080726B (en) 2019-06-06 2019-06-06 Picture transmission method and device

Country Status (1)

Country Link
CN (1) CN111080726B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1304652A2 (en) * 2001-10-13 2003-04-23 Mann + Maus OHG Method for compression and transmission of image data
CN101465747A (en) * 2007-12-21 2009-06-24 高德软件有限公司 Method for adapting grille picture between server and client
US20090210919A1 (en) * 2006-07-03 2009-08-20 Bejing Huaqi Information Digital Technology Co., Ltd Point-Reading Device and Method for Obtaining the Network Audio/Video Files
CN102067592A (en) * 2008-06-17 2011-05-18 思科技术公司 Time-shifted transport of multi-latticed video for resiliency from burst-error effects
CN102521360A (en) * 2011-12-15 2012-06-27 北京地拓科技发展有限公司 Raster data transmission method and system
CN106022011A (en) * 2016-05-30 2016-10-12 合欢森林网络科技(北京)有限公司 Image-based confidential information spreading method, device and system
US20180262495A1 (en) * 2016-06-07 2018-09-13 Tencent Technology (Shenzhen) Company Limited Method and apparatus for data transmission between terminals
CN109272605A (en) * 2017-07-18 2019-01-25 美的智慧家居科技有限公司 Access control method and device
CN109635532A (en) * 2018-12-05 2019-04-16 上海碳蓝网络科技有限公司 A kind of picture pick-up device and its binding method
CN109830248A (en) * 2018-12-14 2019-05-31 维沃移动通信有限公司 A kind of audio recording method and terminal device

Also Published As

Publication number Publication date
CN111080726B (en) 2023-05-23

Similar Documents

Publication Publication Date Title
US9753629B2 (en) Terminal and controlling method thereof
CN115525383B (en) Wallpaper display method and device, mobile terminal and storage medium
KR20200015267A (en) Electronic device for determining an electronic device to perform speech recognition and method for the same
CN114422640B (en) Equipment recommendation method and electronic equipment
CN108881979B (en) Information processing method and device, mobile terminal and storage medium
US20220382788A1 (en) Electronic device and method for operating content using same
CN109062648B (en) Information processing method and device, mobile terminal and storage medium
US20100056188A1 (en) Method and Apparatus for Processing a Digital Image to Select Message Recipients in a Communication Device
CN108509818B (en) Two-dimensional code scanning method and device and computer readable storage medium
CN112823519B (en) Video decoding method, device, electronic equipment and computer readable storage medium
CN111080726B (en) Picture transmission method and device
CN113141605A (en) Data transmission method and device, terminal equipment and readable storage medium
KR101688946B1 (en) Signal processing apparatus and method thereof
CN113111894A (en) Number classification method and device
CN111953980A (en) Video processing method and device
KR20140027826A (en) Apparatus and method for displaying a content in a portabel terminal
CN116055631B (en) Code scanning prompt method and related electronic equipment
US20220321638A1 (en) Processing method and device, electronic device, and computer-readable storage medium
JP2013539100A (en) Method and apparatus for integrating document information
CN113190404B (en) Scene recognition method and device, electronic equipment and computer-readable storage medium
CN114513527B (en) Information processing method, terminal equipment and distributed network
KR102099400B1 (en) Apparatus and method for displaying an image in a portable terminal
CN112291586A (en) Cloud terminal equipment and cloud terminal system
CN111291287A (en) Multimedia file uploading method and device and computer equipment
CN115933924A (en) Bar code display method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant