CN113986018B - Vision impairment auxiliary reading and learning method and system based on intelligent glasses and storage medium - Google Patents


Info

Publication number
CN113986018B
CN113986018B (granted); application CN202111646194.9A
Authority
CN
China
Prior art keywords
reading
text
paragraph
display panel
information
Prior art date
Legal status
Active
Application number
CN202111646194.9A
Other languages
Chinese (zh)
Other versions
CN113986018A (en)
Inventor
孙立
胡金鑫
刘晖
Current Assignee
Nanchang Small Walnut Technology Co ltd
Original Assignee
Jiangxi Yingchuang Information Industry Co ltd
Priority date
Filing date
Publication date
Application filed by Jiangxi Yingchuang Information Industry Co ltd
Priority to CN202111646194.9A
Publication of CN113986018A
Application granted
Publication of CN113986018B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Controls And Circuits For Display Device (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a vision-impaired assisted reading and learning method, system, and storage medium based on smart glasses, wherein the method comprises the following steps: receiving a key enabling signal and sending a reading starting signal to a reading display panel; the reading display panel requesting the terminal server to return specified reading text data according to the reading starting signal, and displaying that data; performing projection scanning on the display surface of the reading display panel; judging whether the projection scanning frame completely covers the boundary frame mark points corresponding to the reading display panel; if so, photographing the currently displayed characters to obtain picture information, and recognizing and extracting the characters to obtain currently extracted text content information and currently extracted text typesetting information; judging whether these match the text content information and text typesetting information correspondingly stored in the terminal server; and if so, carrying out reading and learning. The invention can intuitively display the text content held in the terminal server on the reading display panel in a manner that simulates book typesetting, facilitating reading and learning by visually impaired people.

Description

Vision impairment auxiliary reading and learning method and system based on intelligent glasses and storage medium
Technical Field
The invention relates to the technical field of intelligent glasses application, in particular to a vision-impaired auxiliary reading and learning method and system based on intelligent glasses and a storage medium.
Background
With the continuing advance of information technology, conventional products are evolving toward functional integration, intelligent execution, and convenient operation. Glasses, for example, are a very common object in daily life, and in recent years a technical shift centered on smart glasses has been under way.
Specifically, smart glasses are glasses that, like a smartphone, have an independent operating system on which the user can install programs such as applications and games provided by software service providers. Through voice or gesture control, smart glasses can add schedule entries, navigate maps, interact with friends, take photos and videos, and hold video calls with friends, and they can access a wireless network through a mobile communication network. At present, most smart glasses focus on applications of augmented reality technology.
As is well known, blind people and people with eye diseases generally face reading obstacles and cannot read and learn as sighted people do, which greatly affects their quality of life and limits their self-improvement. An intelligent glasses system with a better reading-assistance function therefore needs to be designed to meet practical application requirements.
Disclosure of Invention
Therefore, the invention aims to solve the problem in the prior art that blind people or people with eye diseases generally face reading obstacles and cannot read and learn as sighted people do, which greatly affects their quality of life and limits their self-improvement.
The invention provides a vision-impaired assisted reading and learning method based on smart glasses, applied to the smart glasses, wherein the smart glasses and a reading display panel are controlled interactively, data transmission exists among the smart glasses, the reading display panel, and a terminal server, a first camera and a first voice broadcaster are arranged on the smart glasses, and a second voice broadcaster is arranged on the reading display panel, wherein the method comprises the following steps:
step one: when the smart glasses receive a key enabling signal, sending a reading starting signal to the reading display panel, wherein the reading starting signal at least comprises target text position information;
step two: the reading display panel requesting the terminal server to return specified reading text data according to the reading starting signal, and displaying the specified reading text data on the reading display panel after it is loaded successfully;
step three: the smart glasses performing projection scanning on the display surface of the reading display panel through the first camera, wherein a projection scanning frame is correspondingly set during the projection scanning;
step four: judging whether the projection scanning frame completely covers the boundary frame mark points corresponding to the reading display panel, wherein the boundary frame mark points together enclose a display area;
step five: if so, photographing the characters currently displayed in the display area of the reading display panel to obtain picture information, and recognizing and extracting the characters in the picture information to obtain currently extracted text content information and currently extracted text typesetting information;
step six: judging whether the currently extracted text content information matches the text content information correspondingly stored in the terminal server and whether the currently extracted text typesetting information matches the text typesetting information correspondingly stored in the terminal server;
step seven: if both match, announcing, through the second voice broadcaster and in sequence, each character in the currently extracted text content information.
In the vision-impaired assisted reading and learning method based on smart glasses provided by the invention, when the smart glasses receive a key enabling signal, a reading starting signal is sent to the reading display panel; the reading display panel requests the terminal server to return the specified reading text data according to the reading starting signal, and the specified reading text data is loaded and displayed on the reading display panel. The smart glasses then photograph the characters currently displayed on the reading display panel to obtain picture information, the text content information and text typesetting information in the picture information are extracted by analysis, and both are matched against the text content information and text typesetting information stored in the terminal server; if both are consistent, each character in the currently extracted text content information is announced through the second voice broadcaster for learning. This method can intuitively display the text content stored in the terminal server on the reading display panel in a manner that simulates book typesetting, facilitating reading and learning by visually impaired people; in addition, because touch-based interactive learning is adopted, the reading and learning effect can be improved and the practical user experience enhanced.
In the vision-impaired assisted reading and learning method based on smart glasses, step one comprises the following steps:
after the smart glasses receive the key enabling signal, generating first voice prompt information and broadcasting it through the first voice broadcaster, wherein the first voice prompt information comprises text chapter list information;
judging whether a reading confirmation signal returned by the user is received within a first preset time after the first voice broadcaster finishes broadcasting, wherein the reading confirmation signal comprises the target text position information;
and if so, controlling the smart glasses to send the reading starting signal to the reading display panel according to the received reading confirmation signal.
In step two, the method for acquiring the specified reading text data comprises the following steps:
searching for and determining the corresponding specified reading text data in the terminal server according to the book title serial number, the chapter number, the target text start position, and the target text length in the target text position information, and returning and loading the specified reading text data to the reading display panel;
the method for displaying on the reading display panel after successful loading comprises the following steps:
searching for and determining the corresponding specified reading text data in the terminal server, wherein the specified reading text data comprises specified reading text typesetting information and a specified reading text length;
and, according to the maximum single-page display text length of the reading display panel and in combination with the specified reading text typesetting information, intercepting text of the corresponding length from the acquired specified reading text data, and displaying it on the reading display panel according to the specified reading text typesetting information.
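As a concrete illustration of this lookup-and-truncate step, the sketch below (Python; the storage layout and all names are illustrative assumptions, since the patent does not specify an implementation) locates a text span by title serial number, chapter number, start position, and length, then truncates it to the maximum single-page display text length:

```python
# Hypothetical sketch of locating and truncating the specified reading text.
# The storage layout (a dict keyed by title serial number and chapter number)
# and all names are illustrative assumptions, not taken from the patent.
MAX_PAGE_CHARS = 300  # maximum single-page display text length

def fetch_page(library, title_no, chapter, start, length):
    """Return at most one display page of the requested target text span."""
    chapter_text = library[title_no][chapter]       # locate by title and chapter
    target = chapter_text[start:start + length]     # target text span
    return target[:MAX_PAGE_CHARS]                  # truncate to one page

library = {4: {"1.2": "A" * 5000}}                  # e.g. title serial number 4
page = fetch_page(library, title_no=4, chapter="1.2", start=0, length=3000)
```

The 3000-character target span is then consumed one 300-character page at a time, matching the single-page capacity described below.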
The reading display panel is square, a first boundary frame mark point, a second boundary frame mark point, a third boundary frame mark point and a fourth boundary frame mark point are arranged on the inner side of the reading display panel, and the first boundary frame mark point, the second boundary frame mark point, the third boundary frame mark point and the fourth boundary frame mark point surround the display area;
a plurality of display unit blocks which are equal in size and are uniformly arranged are arranged in the display area, wherein each display unit block is arranged in a touch manner, and only one character is displayed in each display unit block;
in step four, the method for judging whether the projection scanning frame completely covers the boundary frame mark points corresponding to the reading display panel comprises the following steps:
judging whether the mark signals corresponding to the first, second, third, and fourth boundary frame mark points can all be detected simultaneously within the projection scanning frame.
In the vision-impaired assisted reading and learning method based on smart glasses, the fill display of the specified reading text data comprises the following steps:
splitting each paragraph in the specified reading text data to obtain paragraph text data, wherein each paragraph text data comprises a paragraph text length, a paragraph start character, and a paragraph end character;
confirming the paragraph start position on the reading display panel according to the obtained paragraph start character;
according to the number of display unit blocks in each line of the display area of the reading display panel, sequentially filling the paragraph text data into the corresponding display unit blocks line by line from the paragraph start position, and counting cumulatively to obtain the paragraph fill character length corresponding to the current paragraph;
and, when the paragraph fill character length is judged to be equal to the paragraph text length, completing the fill display operation of the current paragraph according to the obtained paragraph end character.
The vision-impaired reading-assisting learning method based on the intelligent glasses is characterized by further comprising the following steps:
splitting each paragraph in the specified reading text data to obtain paragraph text data, wherein each paragraph text data comprises a paragraph text length, a paragraph start character and a paragraph end character;
and when judging that the text length of the paragraph corresponding to the current paragraph is smaller than the number of display unit blocks in a single line in a display area of the reading display panel, displaying the current paragraph as a single line in the display area.
The vision-impaired assisted reading and learning method based on smart glasses further comprises the following steps:
when the second voice broadcaster begins broadcasting, starting a timer immediately to obtain the current learning time, wherein the second voice broadcaster is used to announce, in sequence, each character in the currently extracted text content information;
counting, cumulatively, the number of times each display unit block in the display area is repeatedly clicked within the current learning time to obtain the corresponding unit block click count;
when the unit block click count exceeds a preset click count, confirming that the word corresponding to that display unit block is a key marked word;
and acquiring the number of key marked words in the display area of the reading display panel within the current learning time, and calculating the reading learning difficulty value from it together with the current learning time.
In the vision-impaired assisted reading and learning method based on smart glasses, the calculation formula corresponding to the reading learning difficulty value is given in the original patent only as an embedded figure (DEST_PATH_IMAGE001), which is not reproduced in this text. Its symbols are defined as follows: D represents the calculated reading learning difficulty value, S0 represents the base difficulty score, N represents the number of key marked words in the display area, M represents the total number of display unit blocks in the display area of the reading display panel, k1 represents a first scoring coefficient, k2 represents a second scoring coefficient, t represents the current learning time, and T represents the standard learning time.
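Because the exact formula exists only as an image in the original patent, the functional form below is an assumption: a plausible reconstruction in which the difficulty value grows with the key-word density N/M and the learning-time ratio t/T, each weighted by its scoring coefficient, on top of the base score S0.

```python
# Assumed form of the reading learning difficulty value (the patent's actual
# formula is an unreproduced image): D = S0 + k1 * (N / M) + k2 * (t / T).
def reading_difficulty(s0, n_key_words, total_blocks, k1, k2, learn_time, std_time):
    """Base difficulty plus weighted key-word density and time ratio."""
    return s0 + k1 * (n_key_words / total_blocks) + k2 * (learn_time / std_time)

# Example: 30 key marked words on a 300-block panel, 600 s spent vs 500 s standard.
d = reading_difficulty(s0=1.0, n_key_words=30, total_blocks=300,
                       k1=10.0, k2=2.0, learn_time=600.0, std_time=500.0)
```

Under these illustrative inputs, more key marked words or a longer learning time both push the difficulty value up, which is consistent with the qualitative description in the claim.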
The invention also provides a vision-impaired assisted reading and learning system based on smart glasses, comprising the smart glasses, a reading display panel, and a terminal server, wherein the smart glasses and the reading display panel are controlled interactively, data transmission exists among the smart glasses, the reading display panel, and the terminal server, a first camera and a first voice broadcaster are arranged on the smart glasses, and a second voice broadcaster is arranged on the reading display panel, wherein,
the smart glasses are configured to:
when a key enabling signal is received, sending a reading starting signal to the reading display panel, wherein the reading starting signal at least comprises target text position information;
the reading display board is used for:
requesting the terminal server to return the specified reading text data according to the reading starting signal, and displaying the data on the reading display panel after it is loaded successfully;
the smart glasses are further configured to:
performing projection scanning on the display surface of the reading display panel through the first camera, wherein a projection scanning frame is correspondingly set during the projection scanning;
judging whether the projection scanning frame completely covers the boundary frame mark points corresponding to the reading display panel, wherein the boundary frame mark points together enclose a display area;
if so, photographing the characters currently displayed in the display area of the reading display panel to obtain picture information, and recognizing and extracting the characters in the picture information to obtain currently extracted text content information and currently extracted text typesetting information;
the reading display panel is further configured to:
judging whether the currently extracted text content information matches the text content information correspondingly stored in the terminal server and whether the currently extracted text typesetting information matches the text typesetting information correspondingly stored in the terminal server;
and, if both match, announcing, through the second voice broadcaster and in sequence, each character in the currently extracted text content information.
The invention further provides a storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the vision-impaired assisted reading and learning method based on smart glasses as described above.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of embodiments of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a flowchart of a vision-impaired reading-assisting learning method based on smart glasses according to a first embodiment of the present invention;
FIG. 2 is a flowchart of a method for generating and transmitting a reading initiation signal according to a first embodiment of the present invention;
fig. 3 is a flowchart of a vision-impaired reading-assisting learning method based on smart glasses according to a second embodiment of the present invention;
fig. 4 is a flowchart of a vision-impaired reading-assisting learning method based on smart glasses according to a third embodiment of the present invention;
fig. 5 is a flowchart of a vision-impaired reading-assisting learning method based on smart glasses according to a fourth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Blind people and people with eye diseases generally face reading obstacles and cannot read and learn as sighted people do, which greatly affects their quality of life and limits their self-improvement. An intelligent glasses system with a better reading-assistance function therefore needs to be designed to meet practical application requirements.
Embodiment One
To solve this technical problem, referring to fig. 1, the invention provides a vision-impaired assisted reading and learning method based on smart glasses, applied to the smart glasses, wherein the smart glasses and a reading display panel are controlled interactively, data transmission exists among the smart glasses, the reading display panel, and a terminal server, a first camera and a first voice broadcaster are arranged on the smart glasses, and a second voice broadcaster is arranged on the reading display panel, wherein the method comprises the following steps:
s101, when the intelligent glasses receive a key enabling signal, a reading starting signal is sent to the reading display panel, and the reading starting signal at least comprises target text position information.
Specifically, referring to fig. 2, in this step, the following steps are included:
s1011, after the intelligent glasses receive the key enabling signal, the intelligent glasses generate first voice prompt information and broadcast the first voice prompt information through the first voice broadcast device, wherein the first voice prompt information comprises text chapter list information.
Specifically, after the smart glasses receive the key enabling signal, they may obtain the text chapter list corresponding to each book from the terminal server. For example, suppose 5 books are stored in the terminal server. The titles of the 5 books can then be broadcast in sequence as the first voice prompt information. After the user selects one of the books by voice confirmation, the first voice broadcaster in the smart glasses broadcasts the chapter list corresponding to the selected book, for example, Chapter 1, Section 2 of One Hundred Years of Solitude.
And S1012, judging whether a reading confirmation signal returned by the user is received or not within a first preset time after the first voice broadcasting device finishes broadcasting, wherein the reading confirmation signal comprises target text position information.
In this step, the reading confirmation signal returned by the user needs to be received within the first preset time after the first voice broadcaster finishes broadcasting; otherwise, the device returns to standby.
Specifically, the target text position information includes a book title serial number, a chapter number, a target text start position, and a target text length. For example, the title serial number of One Hundred Years of Solitude is 4, the chapter number of the target text to be read is Chapter 1, Section 2, the target text start position is the first character of Chapter 1, Section 2, and the target text length is 3000 characters.
And S1013, if so, controlling the smart glasses to send a reading starting signal to the reading display board according to the received reading confirming signal.
It can be understood that the target text can be unambiguously identified from the target text position information in the reading confirmation signal. At this point, the smart glasses send the reading starting signal to the reading display panel so as to activate the reading display panel.
And S102, the reading display board requests to return the appointed reading text data to the terminal server according to the reading starting signal, and the appointed reading text data is displayed on the reading display board after the loading is successful.
In this step, the method for acquiring the designated reading text data includes the following steps:
and searching and determining corresponding appointed reading text data in the terminal server according to the book name serial number, the chapter number, the target text starting position and the target text length in the target text position information, and returning and loading the appointed reading text data to the reading display board. The specific obtaining method is described in the above example, and is not described herein again.
Further, the method for displaying on the reading display panel after the loading is successful comprises the following steps:
and S1021, searching and determining corresponding appointed reading text data in the terminal server, wherein the appointed reading text data comprises appointed reading text typesetting information and appointed reading text length.
And S1022, intercepting the text with the corresponding length from the acquired specified reading text data by combining the specified reading text typesetting information according to the maximum single-page display text length of the reading display panel, and displaying the text on the reading display panel according to the specified reading text typesetting information.
It should be noted that, in the present embodiment, the reading display panel is square. The inner side of the reading display board is provided with a first boundary frame mark point, a second boundary frame mark point, a third boundary frame mark point and a fourth boundary frame mark point. The display area is defined by the first boundary frame mark point, the second boundary frame mark point, the third boundary frame mark point and the fourth boundary frame mark point.
In the present embodiment, a plurality of display unit blocks of equal size and uniform arrangement are provided in the display area. Each display unit block is touch-enabled, and only one character is displayed in each display unit block. It can be understood that the maximum single-page display text length of the reading display panel equals the total number of display unit blocks in the reading display panel (for example, in the present embodiment the display area is arranged in 15 rows and 20 columns, i.e. it contains 300 display unit blocks, so the maximum single-page display text length is 300 characters).
Further, in this step, the method for obtaining the text with the corresponding length from the obtained specified reading text data by combining the specified reading text typesetting information includes the following steps:
s1022a, each paragraph in the reading-designated text data is parsed to obtain paragraph text data, wherein each paragraph text data includes a paragraph text length, a paragraph start character, and a paragraph end character.
After the specified reading text data is located in the terminal server, it is processed and split paragraph by paragraph. Specifically, each paragraph is delimited by its paragraph start character, its paragraph end character, and the corresponding paragraph text length. For example, in One Hundred Years of Solitude, whose title serial number is 4, the chapter number of the target text to be read is Chapter 1, Section 2, the target text start position is the first character of Chapter 1, Section 2, and the target text length is 3000 characters (this is the overall object to be read).
In this step, since the maximum single-page display text length is 300 characters, the first paragraph of Chapter 1, Section 2 of One Hundred Years of Solitude is processed first. For example, if the first paragraph contains 157 characters, its entire content is filled into the reading display panel. During the fill-in process, positioning is carried out through the paragraph start character and the paragraph end character.
S1022b, the start position of the paragraph on the reading display panel is confirmed based on the retrieved paragraph start character.
It should be added here that, by default, the positions of two display unit blocks are left empty before the paragraph start character, forming the paragraph indent.
S1022c, sequentially filling paragraph text data into corresponding display cell blocks in a line-by-line filling manner from the beginning of the paragraph according to the number of display cell blocks in each line of the display area in the reading display panel, and accumulating the count to obtain the paragraph filling character length corresponding to the current paragraph.
For example, since the display panel is arranged in 15 rows and 20 columns, at most 20 characters are placed per row; after each row is filled, filling automatically moves to the next row. Following the above example, since the currently processed paragraph contains 157 characters, all of its text can be filled into the reading display panel in this way.
It should be noted that if the current paragraph is too long, i.e. exceeds 300 characters, it exceeds the maximum number of characters that the display area of the reading display panel can accommodate. In that case, after the current page of learning is finished, filling and display continue in the original arrangement from where the paragraph left off.
S1022d, when it is determined that the paragraph filler length is equal to the paragraph text length, completing the filling display operation of the current paragraph according to the obtained paragraph cut-off symbol.
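The line-by-line filling described in S1022b-S1022d can be sketched as follows (Python; the grid representation, two-block indent, and all names are illustrative assumptions, not taken from the patent):

```python
# Illustrative line-by-line paragraph filling into a 15 x 20 grid of display
# unit blocks; the grid representation, two-block paragraph indent, and names
# are assumptions sketching steps S1022b-S1022d.
ROWS, COLS = 15, 20
INDENT = 2  # two display unit blocks left empty before the paragraph start

def fill_paragraph(grid, row, text):
    """Fill `text` into `grid` from `row`; return the next free row index."""
    col = INDENT  # paragraph indent on the first line
    for ch in text:
        if row >= ROWS:
            break  # page full; the rest continues after the current learning
        grid[row][col] = ch
        col += 1
        if col == COLS:            # row full: move to the next row
            row, col = row + 1, 0
    return row + 1 if col else row

grid = [[None] * COLS for _ in range(ROWS)]
next_row = fill_paragraph(grid, 0, "x" * 157)  # the 157-character example
```

With the 157-character example, the first row holds 18 characters after the indent and the paragraph ends on row index 7, so a following paragraph would start on row 8.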
It should be added here that, for a paragraph stored in the terminal server whose text fits within a single line, the processing method is as follows:
splitting each paragraph in the designated reading text data to obtain paragraph text data, wherein each paragraph text data comprises a paragraph text length, a paragraph start character and a paragraph end character;
and when judging that the text length of the paragraph corresponding to the current paragraph is smaller than the number of display unit blocks in a single line in a display area of the reading display panel, displaying the current paragraph as a single line in the display area.
S103, the intelligent glasses perform projection scanning on the display surface of the reading display panel through the first camera, wherein a projection scanning frame is correspondingly arranged during the projection scanning.
It can be understood that, since the first camera is arranged on the intelligent glasses, the reading display panel can be photographed by the first camera once the filling display operation is completed.
S104, judging whether the projection scanning frame completely covers the boundary frame mark points corresponding to the reading display board, wherein each boundary frame mark point surrounds a display area.
As described above, the first bounding box mark point, the second bounding box mark point, the third bounding box mark point and the fourth bounding box mark point enclose the display area.
In this step, the projection scanning frame corresponding to the first camera of the smart glasses needs to cover the first bounding box mark point, the second bounding box mark point, the third bounding box mark point and the fourth bounding box mark point into the projection scanning frame at the same time. Namely, the marking signals corresponding to the first boundary frame marking point, the second boundary frame marking point, the third boundary frame marking point and the fourth boundary frame marking point need to be detected simultaneously in the projection scanning frame.
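The coverage check of step S104 amounts to testing that all four marker signals fall inside the scan frame at the same time. A minimal sketch, assuming (purely for illustration) that the frame is an axis-aligned rectangle and each marker point is a 2-D coordinate:

```python
# Minimal sketch of the step-S104 check: the projection scan frame covers
# the reading display panel only when all four bounding-box marker points
# are detected inside the frame simultaneously. The rectangle-and-point
# representation is an assumption made for this illustration.

def frame_covers_markers(frame, markers):
    """frame = (x_min, y_min, x_max, y_max); markers = iterable of (x, y)."""
    x0, y0, x1, y1 = frame
    return all(x0 <= x <= x1 and y0 <= y <= y1 for x, y in markers)

corners = [(1, 1), (9, 1), (1, 7), (9, 7)]            # four marker points
print(frame_covers_markers((0, 0, 10, 8), corners))   # → True
print(frame_covers_markers((0, 0, 8, 8), corners))    # → False (one outside)
```

Only when the function returns true does the method proceed to photograph the display area in step S105.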
S105, if so, photographing the characters currently displayed in the display area of the reading display panel to acquire picture information, and identifying and extracting the characters in the picture information to obtain the currently extracted text content information and the currently extracted text typesetting information.
S106, judging whether the current extracted text content information is matched with the text content information correspondingly stored in the terminal server or not, and whether the current extracted text typesetting information is matched with the text typesetting information correspondingly stored in the terminal server or not.
It should be noted that, in this step, comparing whether the currently extracted text typesetting information matches the text typesetting information stored in the terminal server does not mean that the position of every character must be identical; rather, the currently extracted typesetting must satisfy the set typesetting rules. Specifically, characters originally located at the paragraph start symbol and the paragraph end symbol must still occupy those positions when displayed on the reading display panel, and the character count of the paragraph must be unchanged, so that no characters are missing.
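The matching rule just described can be sketched as a pair of checks: a character-for-character content comparison, plus a typesetting check that only verifies the set rules (indent preserved, character count unchanged) rather than identical positions. The data shapes and names below are illustrative assumptions, not the patent's implementation:

```python
# Hedged sketch of the step-S106 comparison. The extracted text must match
# the stored text exactly, while the typesetting check verifies only the
# set rules: the reserved indent before the paragraph start is intact and
# no characters are missing or altered.

def matches_stored(extracted_lines, stored_text, indent=2):
    flat = "".join(extracted_lines)
    content_ok = flat[indent:].rstrip() == stored_text  # no missing characters
    layout_ok = flat[:indent] == " " * indent           # indent rule satisfied
    return content_ok and layout_ok

print(matches_stored(["  ABC"], "ABC"))   # → True
print(matches_stored([" ABC "], "ABC"))   # → False (indent rule violated)
```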
S107, if both match, each character in the currently extracted text content information is broadcast in sequence by the second voice broadcaster.
It can be understood that, when the currently extracted text content information and the currently extracted text typesetting information are confirmed to be correct, the display and typesetting operation of the reading display panel is complete. Each character in the currently extracted text content information is then broadcast in sequence by the second voice broadcaster. In this embodiment, the text content stored in the terminal server can be visually displayed on the reading display panel in a manner that simulates book typesetting, which facilitates reading and learning by visually impaired people.
The invention provides a vision impairment auxiliary reading and learning method based on intelligent glasses. When the intelligent glasses receive a key enabling signal, a reading start signal is sent to the reading display panel; the reading display panel requests the terminal server to return specified reading text data according to the reading start signal, and the specified reading text data is loaded and displayed on the reading display panel. The characters currently displayed on the reading display panel are then photographed through the intelligent glasses to obtain picture information; the text content information and text typesetting information in the picture information are extracted and analyzed, then matched against the text content information and text typesetting information stored in the terminal server; and if both match, each character in the currently extracted text content information is broadcast for learning through the second voice broadcaster. This method can intuitively display the text content stored in the terminal server on the reading display panel in a manner that simulates book typesetting, facilitating reading and learning by visually impaired people; in addition, because touch-based interactive learning is adopted, the reading and learning effect can be improved and the practical user experience enhanced.
Example two
Referring to fig. 3, a second embodiment of the present invention provides a vision-impaired reading-assisting learning method based on smart glasses, which specifically includes the following steps:
S201, when the intelligent glasses receive a key enabling signal, a reading start signal is sent to the reading display panel, the reading start signal at least comprising target text position information.
S202, the reading display panel requests the terminal server to return the specified reading text data according to the reading start signal, displays the text data on the reading display panel after loading succeeds, and generates a screenshot request inquiry instruction.
Steps S201 and S202 have already been explained above and are not described in detail again here.
And S203, after receiving a confirmation signal which is returned by the user and aims at the screenshot request inquiry instruction, performing screenshot on the current display interface of the reading display board to obtain screenshot picture information.
This embodiment reduces, to a certain degree, the inconvenience of photographing the reading display panel with the camera of the intelligent glasses: after receiving the user's confirmation signal for the screenshot request inquiry instruction, a screenshot of the current display interface of the reading display panel is taken to obtain the screenshot picture information.
And S204, identifying and extracting the characters in the screenshot picture information to obtain the currently extracted character content information and the currently extracted character typesetting information.
S205, judging whether the current extracted text content information is matched with the text content information correspondingly stored in the terminal server, and whether the current extracted text typesetting information is matched with the text typesetting information correspondingly stored in the terminal server.
S206, if both match, each character in the currently extracted text content information is broadcast in sequence by the second voice broadcaster.
It can be understood that this embodiment reduces, to a certain degree, the inconvenience of photographing the reading display panel with the camera of the intelligent glasses, and thus improves the user experience.
Example three
Referring to fig. 4, a third embodiment of the present invention provides a vision-impaired reading-assisting learning method based on smart glasses, which specifically includes the following steps:
S301, when the second voice broadcaster starts broadcasting, timing begins immediately to obtain the current learning time, the second voice broadcaster being used to broadcast each character in the currently extracted text content information in sequence.
S302, accumulating and calculating the times of repeated clicks of each display unit block in the display area in the current learning time to obtain the corresponding times of clicks of the unit blocks.
In actual reading and learning, each display unit block in the reading display panel is touch-enabled. When the user has a question about the character in one of the display unit blocks, the user may click that display unit block repeatedly. Additionally, after a display unit block is clicked, the reading display panel communicates with the terminal server to obtain the explanation of the difficult word, and the explanation is broadcast through the second voice broadcaster.
S303, when the unit-block click count exceeds the preset click count, the character corresponding to that display unit block is determined to be a key marked word.
It can be understood that, for example, if the unit-block click count of a certain display unit block is 4 and the preset click count is set to 3, the character in that display unit block is listed as a key marked word.
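Steps S302–S303 can be sketched as a simple tally over the session's click events. The event representation, the mapping from block index to character, and the threshold value are assumptions taken from the example above, not from the patent text:

```python
# Illustrative sketch of steps S302-S303: accumulate clicks per display
# unit block during the learning session and mark a block's character as
# a key marked word once its click count exceeds the preset click count.

from collections import Counter

def key_marked_words(click_events, grid_chars, preset_clicks=3):
    counts = Counter(click_events)            # unit-block click counts
    return {grid_chars[i] for i, n in counts.items() if n > preset_clicks}

# block 5 is clicked four times, exceeding the preset count of three
events = [5, 5, 5, 5, 2, 2]
print(key_marked_words(events, {5: "difficult", 2: "easy"}))  # → {'difficult'}
```

The resulting set of key marked words is what step S304 counts when computing the reading learning difficulty value.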
S304, acquiring the number of the key mark words in the display area of the reading display board in the current learning time, and calculating to obtain a reading learning difficulty value according to the current learning time.
In this step, the reading learning difficulty value is calculated by a formula that survives in the original publication only as an image and is not reproduced in this text. The symbols it uses are defined as follows:

[Formula image not reproduced in this text]

wherein, D represents the calculated reading learning difficulty value; D0 represents the base difficulty score; N represents the number of key marked words in the display area; M represents the total number of display unit blocks in the display area of the reading display panel; k1 represents the first scoring coefficient; k2 represents the second scoring coefficient; t represents the current learning time; and T represents the standard learning time.
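Since the formula itself is not reproduced in this text, the sketch below is a purely hypothetical reconstruction: a linear combination of the base difficulty score, the density of key marked words, and the learning-time ratio, consistent with the listed symbol definitions but NOT guaranteed to be the patented formula.

```python
# HYPOTHETICAL reconstruction of the reading-difficulty computation.
# The true formula appears only as an image in the original publication;
# this linear form is an illustrative assumption built from the symbol
# definitions (base score, key-word density, learning-time ratio).

def reading_difficulty(base, key_words, total_blocks, k1, k2, t, t_std):
    # base difficulty score, plus a term weighting the proportion of key
    # marked words, plus a term weighting learning time against standard
    return base + k1 * (key_words / total_blocks) + k2 * (t / t_std)

# 6 key marked words out of 300 blocks, 900 s spent vs. a 600 s standard
print(reading_difficulty(base=50, key_words=6, total_blocks=300,
                         k1=100, k2=10, t=900, t_std=600))  # → 67.0
```

Whatever its exact form, the patented formula increases the difficulty value with more key marked words and with longer learning time relative to the standard, which this sketch mirrors.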
Referring to fig. 5, a fourth embodiment of the present invention provides a vision impairment auxiliary reading and learning system based on intelligent glasses, comprising intelligent glasses, a reading display panel and a terminal server. The intelligent glasses and the reading display panel are controlled interactively, and data transmission exists among the intelligent glasses, the reading display panel and the terminal server. A first camera and a first voice broadcaster are disposed on the intelligent glasses, and a second voice broadcaster is disposed on the reading display panel, wherein,
the smart glasses are configured to:
when a key enabling signal is received, sending a reading starting signal to the reading display panel, wherein the reading starting signal at least comprises target text position information;
the reading display panel is used for:
requesting to return designated reading text data to the terminal server according to the reading starting signal, and displaying the designated reading text data on the reading display panel after the reading text data is loaded successfully;
the smart glasses are further configured to:
the display surface of the reading display panel is subjected to projection scanning through the arranged first camera, wherein a projection scanning frame is correspondingly arranged during the projection scanning;
judging whether the projection scanning frame completely covers the boundary frame mark points corresponding to the reading display board or not, wherein each boundary frame mark point is encircled to form a display area;
if so, photographing the currently displayed characters in the display area in the reading display panel to acquire picture information, and identifying and extracting the characters in the picture information to acquire currently extracted character content information and currently extracted character typesetting information;
the reading display panel is further configured to:
judging whether the current extracted text content information is matched with the text content information correspondingly stored in the terminal server or not and whether the current extracted text typesetting information is matched with the text typesetting information correspondingly stored in the terminal server or not;
if both match, each character in the currently extracted text content information is broadcast in sequence through the second voice broadcaster.
The invention also provides a storage medium on which a computer program is stored, the program, when executed by a processor, implementing the vision impairment auxiliary reading and learning method based on intelligent glasses described above.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (8)

1. A vision impairment auxiliary reading learning method based on intelligent glasses is applied to the intelligent glasses, interactive control is carried out between the intelligent glasses and a reading display panel, data transmission exists among the intelligent glasses, the reading display panel and a terminal server, a first camera and a first voice broadcast device are arranged on the intelligent glasses, and a second voice broadcast device is arranged on the reading display panel, and is characterized by comprising the following steps:
the method comprises the following steps: when the intelligent glasses receive a key enabling signal, sending a reading starting signal to the reading display panel, wherein the reading starting signal at least comprises target text position information;
step two: the reading display board requests to return appointed reading text data to the terminal server according to the reading starting signal, and the appointed reading text data is displayed on the reading display board after the loading is successful;
step three: the intelligent glasses perform projection scanning on the display surface of the reading display panel through the arranged first camera, wherein a projection scanning frame is correspondingly arranged during the projection scanning;
step four: judging whether the projection scanning frame completely covers the boundary frame mark points corresponding to the reading display board or not, wherein each boundary frame mark point is encircled to form a display area;
step five: if so, photographing the currently displayed characters in the display area in the reading display panel to acquire picture information, and identifying and extracting the characters in the picture information to acquire currently extracted character content information and currently extracted character typesetting information;
step six: judging whether the current extracted text content information is matched with the text content information correspondingly stored in the terminal server or not and whether the current extracted text typesetting information is matched with the text typesetting information correspondingly stored in the terminal server or not;
step seven: if both match, broadcasting each character in the currently extracted text content information in sequence through the second voice broadcaster;
the method further comprises the steps of:
when the second voice broadcast device starts broadcasting, timing immediately to obtain the current learning time, wherein the second voice broadcast device is used for broadcasting each character in the currently extracted character content information in sequence for a user to learn;
accumulating and calculating the times of repeated clicks of each display unit block in the display area in the current learning time to obtain the corresponding times of clicks of the unit blocks;
when the clicking times of the unit block exceed the preset clicking times, confirming that the words corresponding to the display unit block are key marked words;
acquiring the number of key marker words in a display area of the reading display board within the current learning time, and calculating to obtain a reading learning difficulty value according to the current learning time;
the corresponding calculation formula of the reading learning difficulty value is as follows:
[Formula image not reproduced in this text]

wherein, D represents the calculated reading learning difficulty value; D0 represents the base difficulty score; N represents the number of key marked words in the display area; M represents the total number of display unit blocks in the display area of the reading display panel; k1 represents the first scoring coefficient; k2 represents the second scoring coefficient; t represents the current learning time; and T represents the standard learning time.
2. The vision impairment assisted reading learning method based on smart glasses as claimed in claim 1, wherein in the step one, the method comprises the following steps:
after the intelligent glasses receive the key starting signal, the intelligent glasses generate first voice prompt information and broadcast the first voice prompt information through the first voice broadcaster, wherein the first voice prompt information comprises text chapter list information;
judging whether a reading confirmation signal returned by a user is received or not within a first preset time after the first voice broadcast device finishes broadcasting, wherein the reading confirmation signal comprises the target text position information;
and if so, controlling the intelligent glasses to send the reading starting signal to the reading display board according to the received reading confirming signal.
3. The vision impairment assistant reading and learning method based on the intelligent glasses as claimed in claim 1, wherein the target text position information includes a book title serial number, a chapter number, a target text start position and a target text length, and in the second step, the method for obtaining the designated reading text data includes the following steps:
searching and determining corresponding appointed reading text data in the terminal server according to the book name serial number, the chapter number, the target text starting position and the target text length in the target text position information, and returning and loading the appointed reading text data to the reading display board;
the method for displaying on the reading display board after the loading is successful comprises the following steps:
searching and determining corresponding appointed reading text data in the terminal server, wherein the appointed reading text data comprises appointed reading text typesetting information and appointed reading text length;
and intercepting the text with the corresponding length in the acquired specified reading text data in combination with the specified reading text typesetting information according to the maximum single-page display text length of the reading display panel, and displaying the text on the reading display panel according to the specified reading text typesetting information.
4. The vision impairment auxiliary reading and learning method based on intelligent glasses as claimed in claim 3, wherein the reading display panel is square, a first bounding box marker point, a second bounding box marker point, a third bounding box marker point and a fourth bounding box marker point are disposed on the inner side of the reading display panel, and the first, second, third and fourth bounding box marker points enclose the display area;
a plurality of display unit blocks which are equal in size and are uniformly arranged are arranged in the display area, wherein each display unit block is arranged in a touch manner, and only one character is displayed in each display unit block;
in the fourth step, the method for determining whether the projection scan frame completely covers the border frame mark point corresponding to the reading display panel includes the following steps:
and judging whether the marking signals corresponding to the first boundary frame marking point, the second boundary frame marking point, the third boundary frame marking point and the fourth boundary frame marking point can be detected in the projection scanning frame at the same time.
5. The vision impairment auxiliary reading learning method based on the intelligent glasses as claimed in claim 4, wherein the method for intercepting the text with the corresponding length from the obtained specified reading text data by combining the specified reading text typesetting information comprises the following steps:
splitting each paragraph in the specified reading text data to obtain paragraph text data, wherein each paragraph text data comprises a paragraph text length, a paragraph start character and a paragraph end character;
confirming a paragraph start position in the reading display board according to the obtained paragraph start character;
according to the number of display unit blocks of each line of a display area in the reading display panel, sequentially filling the paragraph text data into corresponding display unit blocks in a line-by-line filling mode from the beginning position of the paragraph, and accumulating and counting to obtain the paragraph filling character length corresponding to the current paragraph;
and when the paragraph filling character length is judged to be equal to the paragraph text length, completing the filling display operation of the current paragraph according to the obtained paragraph cut-off symbol.
6. The vision impairment auxiliary reading and learning method based on intelligent glasses according to claim 5, further comprising:
splitting each paragraph in the specified reading text data to obtain paragraph text data, wherein each paragraph text data comprises a paragraph text length, a paragraph start character and a paragraph end character;
and when judging that the text length of the paragraph corresponding to the current paragraph is smaller than the number of display unit blocks in a single line in a display area of the reading display panel, displaying the current paragraph as a single line in the display area.
7. A vision impairment auxiliary reading learning system based on intelligent glasses comprises the intelligent glasses, a reading display panel and a terminal server, wherein the intelligent glasses and the reading display panel are controlled in an interactive mode, data transmission exists among the intelligent glasses, the reading display panel and the terminal server, a first camera and a first voice announcer are arranged on the intelligent glasses, a second voice announcer is arranged on the reading display panel, and the vision impairment auxiliary reading learning system is characterized in that,
the smart glasses are configured to:
when a key enabling signal is received, sending a reading starting signal to the reading display panel, wherein the reading starting signal at least comprises target text position information;
the reading display board is used for:
requesting to return appointed reading text data to the terminal server according to the reading starting signal, and displaying on the reading display board after the loading is successful;
the smart glasses are further configured to:
the display surface of the reading display panel is subjected to projection scanning through the arranged first camera, wherein a projection scanning frame is correspondingly arranged during the projection scanning;
judging whether the projection scanning frame completely covers the boundary frame mark points corresponding to the reading display board or not, wherein each boundary frame mark point is encircled to form a display area;
if so, photographing the currently displayed characters in the display area in the reading display panel to acquire picture information, and identifying and extracting the characters in the picture information to acquire currently extracted character content information and currently extracted character typesetting information;
the reading display panel is further configured to:
judging whether the current extracted text content information is matched with the text content information correspondingly stored in the terminal server or not and whether the current extracted text typesetting information is matched with the text typesetting information correspondingly stored in the terminal server or not;
if both match, broadcasting each character in the currently extracted text content information in sequence through the second voice broadcaster;
the reading display panel is further configured to:
when the second voice broadcast device starts broadcasting, timing immediately to obtain the current learning time, wherein the second voice broadcast device is used for broadcasting each character in the currently extracted character content information in sequence for a user to learn;
accumulating and calculating the times of repeated clicks of each display unit block in the display area in the current learning time to obtain the corresponding times of clicks of the unit blocks;
when the clicking times of the unit block exceed the preset clicking times, confirming that the words corresponding to the display unit block are key marked words;
acquiring the number of key marker words in a display area of the reading display board within the current learning time, and calculating to obtain a reading learning difficulty value according to the current learning time;
the corresponding calculation formula of the reading learning difficulty value is as follows:
[Formula image not reproduced in this text]

wherein, D represents the calculated reading learning difficulty value; D0 represents the base difficulty score; N represents the number of key marked words in the display area; M represents the total number of display unit blocks in the display area of the reading display panel; k1 represents the first scoring coefficient; k2 represents the second scoring coefficient; t represents the current learning time; and T represents the standard learning time.
8. A storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the vision impairment auxiliary reading and learning method based on intelligent glasses as recited in any one of claims 1 to 6.
CN202111646194.9A 2021-12-30 2021-12-30 Vision impairment auxiliary reading and learning method and system based on intelligent glasses and storage medium Active CN113986018B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111646194.9A CN113986018B (en) 2021-12-30 2021-12-30 Vision impairment auxiliary reading and learning method and system based on intelligent glasses and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111646194.9A CN113986018B (en) 2021-12-30 2021-12-30 Vision impairment auxiliary reading and learning method and system based on intelligent glasses and storage medium

Publications (2)

Publication Number Publication Date
CN113986018A CN113986018A (en) 2022-01-28
CN113986018B true CN113986018B (en) 2022-08-09

Family

ID=79734943

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111646194.9A Active CN113986018B (en) 2021-12-30 2021-12-30 Vision impairment auxiliary reading and learning method and system based on intelligent glasses and storage medium

Country Status (1)

Country Link
CN (1) CN113986018B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115439854B (en) * 2022-09-05 2023-05-02 深圳市学之友科技有限公司 Scanning display method based on interconnection of scanning pen and intelligent terminal

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FI20000691A0 (en) * 2000-03-24 2000-03-24 Markku Leinonen Procedure for measuring reading ability and aids therefore
CN104143084A (en) * 2014-07-17 2014-11-12 武汉理工大学 Auxiliary reading glasses for visual impairment people
CN104484355A (en) * 2014-11-28 2015-04-01 广东小天才科技有限公司 Method and terminal for assisting users in new word consolidation before and after reading
CN106023018A (en) * 2016-05-23 2016-10-12 华中师范大学 Online reading capability evaluating method and system
US9558159B1 (en) * 2015-05-15 2017-01-31 Amazon Technologies, Inc. Context-based dynamic rendering of digital content
CN112712806A (en) * 2020-12-31 2021-04-27 南方科技大学 Auxiliary reading method and device for visually impaired people, mobile terminal and storage medium
CN113052730A (en) * 2021-02-23 2021-06-29 杭州一亩教育科技有限公司 Child grading reading assisting method and system for self-help reading ability acquisition
CN113554338A (en) * 2021-08-03 2021-10-26 成都纷极科技有限公司 Method for evaluating reading ability of user

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110078561A1 (en) * 2009-09-28 2011-03-31 Daniel Herzner Method and system of formatting text in an electronic document to increase reading speed
CN101986290A (en) * 2010-06-30 2011-03-16 汉王科技股份有限公司 Electronic reader and document typesetting method thereof
CN103631506B (en) * 2012-08-24 2018-09-04 腾讯科技(深圳)有限公司 Reading method based on terminal and corresponding terminal
CN104983511A (en) * 2015-05-18 2015-10-21 上海交通大学 Voice-helping intelligent glasses system aiming at totally-blind visual handicapped
CN107391742A (en) * 2017-08-09 2017-11-24 上海斐讯数据通信技术有限公司 A kind of article reads recording method and the system of progress
CN110287830A (en) * 2019-06-11 2019-09-27 广州市小篆科技有限公司 Intelligence wearing terminal, cloud server and data processing method
EP4000056A1 (en) * 2019-07-19 2022-05-25 Gifted Bill Enterprises LLC System and method for improving reading skills of users with reading disability symptoms
CN111860121B (en) * 2020-06-04 2023-10-24 上海翎腾智能科技有限公司 Reading ability auxiliary evaluation method and system based on AI vision
CN111643324A (en) * 2020-07-13 2020-09-11 江苏中科智能制造研究院有限公司 Intelligent glasses for blind people

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A Comparative Study of the Reading Ability of 9-Year-Old Children Based on Smart Terminals and Paper Media; Jiang Hongwei et al.; 《新世纪图书馆》 (New Century Library); 2020-01-20 (No. 01); full text *

Also Published As

Publication number Publication date
CN113986018A (en) 2022-01-28

Similar Documents

Publication Publication Date Title
US20100100904A1 (en) Comment distribution system, comment distribution server, terminal device, comment distribution method, and recording medium storing program
US8599309B2 (en) Method and system for identifying addressing data within a television presentation
KR101050866B1 (en) Character recognition devices, character recognition programs, and character recognition methods
CN106375860B (en) Video playing method, device, terminal and server
JP6202815B2 (en) Character recognition device, character recognition method, and character recognition program
CN104508689A (en) A two-dimension code processing method and a terminal
CN102426573A (en) Content analysis apparatus and method
CN113986018B (en) Vision impairment auxiliary reading and learning method and system based on intelligent glasses and storage medium
CN108282683A (en) A kind of video clip display methods and device
CN112118395A (en) Video processing method, terminal and computer readable storage medium
CN111159975A (en) Display method and device
CN112235632A (en) Video processing method and device and server
CN106201509A (en) A kind of method for information display, device and mobile terminal
CN109543072B (en) Video-based AR education method, smart television, readable storage medium and system
CN104837065A (en) Television terminal-to-mobile terminal two-dimensional code information sharing method and system
CN112306601A (en) Application interaction method and device, electronic equipment and storage medium
US20170279749A1 (en) Modular Communications
CN110381353A (en) Video scaling method, apparatus, server-side, client and storage medium
CN112840305A (en) Font switching method and related product
US20210073458A1 (en) Comic data display system, method, and program
CN114429464A (en) Screen-breaking identification method of terminal and related equipment
CN108632370B (en) Task pushing method and device, storage medium and electronic device
CN110881132B (en) Method and related device for checking distance between live broadcast rooms
CN112650467A (en) Voice playing method and related device
CN107679068B (en) Information importing and displaying method of multimedia file, mobile terminal and storage device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240315

Address after: Room 1301-1303, 13th Floor, Commercial Office Building 1 #, Greenland Expo City, No. 1388 Jiulonghu Avenue, Honggutan District, Nanchang City, Jiangxi Province, 330038

Patentee after: Nanchang small walnut Technology Co.,Ltd.

Country or region after: China

Address before: Room 1814, Commercial Office Building 1 #, Greenland Expo City, No. 1388 Jiulonghu Avenue, Honggutan District, Nanchang City, Jiangxi Province, 330038

Patentee before: Jiangxi Yingchuang Information Industry Co.,Ltd.

Country or region before: China

TR01 Transfer of patent right