CN115167736B - Image-text position adjustment method, image-text position adjustment equipment and storage medium - Google Patents

Image-text position adjustment method, image-text position adjustment equipment and storage medium

Info

Publication number
CN115167736B
CN115167736B (application CN202210795262.6A)
Authority
CN
China
Prior art keywords
user
area
target content
moved
screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210795262.6A
Other languages
Chinese (zh)
Other versions
CN115167736A (en)
Inventor
梅品西
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Happycast Technology Co Ltd
Original Assignee
Shenzhen Happycast Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Happycast Technology Co Ltd filed Critical Shenzhen Happycast Technology Co Ltd
Priority to CN202210795262.6A
Publication of CN115167736A
Application granted
Publication of CN115167736B
Legal status: Active


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application provide an image-text position adjustment method, device, and storage medium in the field of computer technology. The method includes: determining an area to be moved for target content according to the target content written by a first user on a screen and the position of that content on the screen; and moving the target content to the area to be moved so as to vacate a target area on the screen, where the target area is an area conforming to the first user's writing habit, determined from the first user's historical writing-area information. Starting from the user's own experience, the method moves the image-text content the user has already finished writing to another area, improving the user's writing comfort.

Description

Image-text position adjustment method, image-text position adjustment equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technology, and in particular to an image-text position adjustment method, device, and storage medium.
Background
An electronic whiteboard is a digital teaching and presentation device or program that replaces the traditional blackboard and chalk. It is widely used in modern conferences and teaching activities, can operate entirely without a mouse and keyboard, and allows a user to edit, annotate, and save computer files and to write directly on the whiteboard with a finger or a dedicated pen, which brings great convenience.
In scenarios where a screen-cast picture is displayed on an electronic whiteboard or a large screen that supports touch interaction, a user can write and draw image-text content on the screen with a whiteboard pen, a mouse, or a finger. After finishing writing on the screen or whiteboard, the user can make room for the next stage of content by erasing part of the content or clearing it entirely.
In practice, however, most image-text content cannot simply be removed; it must be kept for later explanation or for other users to view. As a result, factors such as the user's writing habits, pen-down position, and height cause the image-text on the screen or whiteboard to become crowded, and the user's writing comfort suffers.
Disclosure of Invention
Embodiments of the present application provide an image-text position adjustment method, device, and storage medium that can improve writing comfort.
In a first aspect, an embodiment of the present application provides a method for adjusting a position of an image, where the method includes:
determining a region to be moved of target content according to the target content written by a first user on a screen and the position of the target content in the screen;
And moving the target content to the area to be moved so as to vacate a target area on the screen, wherein the target area belongs to an area conforming to the writing habit of the first user, which is determined according to the historical writing area information of the first user.
Specifically, web conferences, screen-casting conferences, and video teaching are increasingly common and have in many cases replaced traditional conferences and teaching. In these scenarios, a first user writes on a cloud desktop, a virtual electronic whiteboard, or a physical electronic whiteboard, and viewers can directly see the image-text content written by the first user.
However, virtual and physical electronic whiteboards suffer from the same problem as the blackboards teachers used in the past: where the image-text can be written is constrained by the first user's habits and height. This lowers the utilization of the whiteboard, and the more content is written, the more strained the writing posture becomes, while the viewing experience of the audience also deteriorates.
Therefore, a method is provided for moving the target content to another area of the whiteboard, so that the first user can keep writing in a comfortable area. This improves the first user's writing experience and, after the target content has been moved, avoids image-text content becoming so cramped that it harms other viewers' experience.
With reference to the first aspect, in an optional aspect, before determining the area to be moved of the target content according to the target content written by the first user on the screen and the position of the target content in the screen, the method further includes:
detecting the comfort level of the first user according to the behavior information of the first user in the process of writing on the screen, and determining that the comfort level is lower than a preset comfort level;
specifically, the first user is a writer with existing graphic contents, most of the situations are the main speaker of the current conference or teaching activity, the target content is the graphic contents written by the first user on a screen, the comfort level is the experience of the first user in the writing process, the first user can be obtained according to the behavior information analysis and can be classified as normal, comfortable or uncomfortable, and the comfort level is used for reflecting the writing experience of the first user in the writing process;
the behavior information comprises time length and/or limb actions of a first user in the process of writing content, and the comfort level of the first user is scored according to the time length and/or limb actions of the first user in the process of writing content, wherein the limb actions comprise one or more of bending, low head and squatting;
It is worth to say that, the pause time is the pause time of the first user in the process of writing the content, generally, in the process of continuously writing the first user on the screen, a short pause exists between the previous stroke and the next stroke, a short pause exists in the process of writing, the pause time between strokes is smaller than the pause time in the process of line changing, so that the preset pause time is set, and the preset pause time is longer than the pause time in the process of line changing, so that in actual application, two situations exist, namely, the pause time of line changing is longer than the preset pause time due to the fact that writing discomfort exists in the first user, the target content written by the first user is temporarily ended, and a long time is needed to be interrupted, and the two situations are suitable for the method, namely, the two situations can be used for detecting the comfort degree of the first user through the method, and the aim of freeing a target area is achieved; the limb action is used for judging whether the first user repeatedly generates various actions which are easy to cause fatigue, such as bending down, lowering the head, squatting up and the like, and under normal conditions, in the process of writing the image-text content on the screen, the comfortable writing area is used for writing the image-text content data, so that the user can continuously complete writing only by repeatedly bending down and lowering the head in the subsequent writing process, and the limb action is used as a reference item for judging the comfort level of the first user; in practical application, the behavior information is obtained through corresponding equipment, for example, the pause time can be obtained through a touch sensor or a pressure sensor, and the limb actions can be obtained directly through a camera or obtained through analysis according to video information;
Further, according to the pause time of the first user in the writing process and the limb action, calculating the score, deducting the corresponding score when the pause time exceeds the preset pause time or the limb action occurs, judging that the comfort of the first user is lower than the preset comfort when the score is lower than the score of the preset comfort, wherein the score of the preset comfort is obtained according to historical experience, the method can be flexible and changeable in the application process, more different dimensions can be selected to evaluate the comfort of the user, and the two dimensions provided by the scheme respectively judge the comfort of the first user from subjective and objective dual angles, so that the judgment accuracy is very high.
With reference to the first aspect, in a further alternative, before determining the area to be moved of the target content according to the target content written by the first user on the screen and the position of the target content in the screen, after determining that the comfort level is lower than the preset comfort level, the method further includes:
outputting prompt information;
receiving indication information aiming at the prompt information;
specifically, after the first user determines that the comfort level is lower than the preset comfort level, a prompt message is output to the first user, wherein the prompt message includes, but is not limited to, whether to inquire whether to move target content and an instruction option, the instruction option is used for the first user to select whether to move, the situation that the target content is not allowed to move directly is avoided, the prompt mode of the prompt message is not unique, and may be popup window display or voice prompt, the prompt message is output to a use interface of the first user, particularly in a cloud video conference, a participant is in a virtual conference scene, the seen interface is a shared interface put in by the first user, but because the prompt message may affect the participant to view the graphic content of the shared interface, the prompt message is only output to the use interface of the first user, but does not appear on the shared interface that the participant can view, but when the application scene is a screen of a screen conference or a screen watched by the participant is directly the use interface of the first user, the prompt message can be output to a region which does not affect the viewing.
With reference to the first aspect, in yet another alternative, the area to be moved is determined according to the target content and its position on the screen, so that the area is determined in a personalized way and meets the user's needs.
Specifically, the line spacing of the target content is obtained by analyzing the target content and its position on the screen, and the height of the first user's written content is determined from the line spacing and a preset number of lines. Optionally, the height of the written content is taken as the height of the first user's target area. In an actual scenario the height of the target area is related to the first user's height and pen-down position; in general, the comfortable area when a person writes while standing, that is, the target area, spans roughly from the forehead to the lower abdomen, so the number of lines written within that height is typically 4 to 7. Besides being derived as above, the preset number of lines can also be obtained from the current user's historical writing habits, from a preset value based on other historical users' writing habits used to define the approximate height of the target area, or from a value set by the user. From the height of the target area, the minimum rectangular area enclosing the written content is obtained: the height of the written content is taken as the vertical height of the minimum rectangle, and its horizontal width is determined by the horizontal extent of the left and right ends of the target content.
The area to be moved for the target content is then determined from the target area, so that the target content is moved to one side of it. Following the usual left-to-right, top-to-bottom writing habit, the area recommended to the user is preferably to the left of the target area, then above it, and finally to its right or below it.
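As an illustration of this preference ordering, the following minimal Python sketch picks the first candidate side around the target area that still fits on the screen. The simple rectangle model and all names here are assumptions made for illustration, not part of the patent:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rect:
    x: float  # left edge
    y: float  # top edge
    w: float  # width
    h: float  # height

def fits_on_screen(r: Rect, screen: Rect) -> bool:
    """True if the rectangle lies entirely inside the screen."""
    return (r.x >= screen.x and r.y >= screen.y and
            r.x + r.w <= screen.x + screen.w and
            r.y + r.h <= screen.y + screen.h)

def choose_area_to_move(target: Rect, screen: Rect) -> Optional[Rect]:
    """Return a candidate area to move the content into, preferring the
    left of the target area, then above it, then its right, then below."""
    candidates = [
        Rect(target.x - target.w, target.y, target.w, target.h),  # left
        Rect(target.x, target.y - target.h, target.w, target.h),  # above
        Rect(target.x + target.w, target.y, target.w, target.h),  # right
        Rect(target.x, target.y + target.h, target.w, target.h),  # below
    ]
    for candidate in candidates:
        if fits_on_screen(candidate, screen):
            return candidate
    return None  # no free side; another strategy would be needed
```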
With reference to the first aspect, in yet another optional aspect, the area to be moved corresponding to the target content and its position on the screen is determined from a large amount of data.
Specifically, each set of data contains written content, the position of that content on a screen, the movement area of that content, and the correspondence among them. A writing-area library is preset; it holds a large amount of mutually corresponding information about the content written by historical users in historical scenarios, the positions of that content on the screen, and the movement areas of that content. By inputting the target content written by the first user and its position on the screen into the writing-area library, the corresponding movement area can be obtained and used as the area to be moved for the target content written by the first user.
The writing-area library generally consists of historical data generated during earlier testing, but it can also consist of historical data generated by the first user in past scenarios.
In this scheme, the more historical data the writing-area library accumulates, the closer it comes to the optimal area to be moved found in historical scenarios, and the more generally applicable the method or device becomes.
With reference to the first aspect, in yet another optional aspect, the target content written by the first user on the screen and its position on the screen are input into a region prediction model to obtain the area to be moved for the target content.
Specifically, the region prediction model is trained on a large amount of data. Each set of data includes written content, the position of that content on a screen, and the movement area of that content; the written content and its position on the screen are feature data, and the movement area is label data.
The model is a neural network model. Based on the self-learning capability of artificial intelligence, the large amount of data is fed into the region prediction model to obtain the movement area, that is, the model is trained, and the larger the amount of training data, the more accurately the region prediction model predicts.
With this method, the first user's actual writing experience is obtained by detecting the first user's writing comfort, which is then used to judge whether the first user really needs the target content moved. When the first user's comfort level is lower than the preset comfort level, the image-text in the target area is moved elsewhere on the screen so that the first user can write in the target area again, fatigue-inducing movements are avoided, and the first user's experience improves.
With reference to the first aspect, in yet another alternative, there is provided an image-text position adjustment method, the method including:
determining the area to be moved for the target content according to the target content written by the first user on the screen, the position of the target content on the screen, and the behavior information of a second user.
Specifically, in this method, after an initial area to be moved for the target content has been determined according to the target content written by the first user on the screen and its position on the screen, it is judged whether that initial area meets the requirements of a second user.
After the initial area to be moved is obtained, whether it meets the second user's requirements is judged according to the second user's behavior information. The second user is any viewer other than the first user; in a normal conference or teaching activity, blocked, unclear, or overly dense image-text greatly harms the second user's viewing experience, so that experience is used as an important reference when analyzing the area to be moved.
The behavior information of the second user includes action information and language information. The action information includes one or more of body movements and head movements, and the language information includes the second user's speech or text; both are used to judge whether the initial area to be moved meets the second user's requirements.
If the second user's language information contains words indicating that something cannot be seen clearly or is blocked, or contains a position-adjustment request (for example, "move it to the left"), it is judged whether the initial area to be moved meets the second user's requirements. If it does, the initial area is determined as the area to be moved for the target content. If it does not — for example, the second user explicitly asks for the content to be moved to the left while the initial area lies to the right of the target area — the determination of the area to be moved is restarted and the area where the initial candidate was located is excluded.
With this scheme, the viewing experience of viewers other than the first user is given more consideration. A conference or teaching activity is an exchange of information: the first user is usually the sender of the information and the second user the receiver, so the second user is the one affected by the image-text position. Treating the second user as a reference for the area to be moved is therefore a humane design that improves the second user's viewing experience.
In a second aspect, an embodiment of the present application provides an image-text position adjustment apparatus that includes at least a first determining unit and a moving unit. The image-text position adjustment apparatus is used to implement the method described in any embodiment of the first aspect, and includes:
a first determining unit, configured to determine a region to be moved of a target content according to the target content written by a first user on a screen and a position of the target content in the screen;
and a moving unit, configured to move the target content to the area to be moved so as to vacate a target area on the screen, where the target area belongs to an area conforming to the first user's writing habit, determined according to the first user's historical writing-area information.
It can be understood that the first determining unit obtains the area to be moved corresponding to the target content and its position on the screen, and the moving unit can move the target content to that area by sliding, substitution, or another manner, so that the target area is wholly or partly vacated and the first user can continue to write in an area that matches the first user's writing habit.
With reference to the second aspect, in yet another alternative, the apparatus further includes:
the detection unit is used for detecting the comfort level of the first user according to the behavior information of the first user in the process of writing on the screen;
and the second determining unit is used for determining that the comfort level is lower than a preset comfort level.
It can be understood that the detection unit detects the first user's comfort level from the first user's behavior information, judging from the first user's own experience whether the target content needs to be moved, and the second determining unit confirms the result of the comfort evaluation.
With reference to the second aspect, in yet another alternative, the detection unit is specifically configured to:
And scoring the comfort level of the first user according to the time length and/or limb movement of the first user when the first user pauses in the process of writing the content.
It can be understood that after the first user's behavior information is obtained, the pause durations and body movements reflected in it are recorded and scored, and points are deducted cumulatively whenever the corresponding behavior occurs, until the comfort score falls below the score of the preset comfort level. As an illustration: the first user's comfort score starts at a full score of 100 and the preset comfort level is 90; 2 points are deducted each time a pause exceeds the preset pause duration and 3 points each time a fatigue-inducing movement occurs, so that the first user's comfort is strictly monitored.
With reference to the second aspect, in yet another alternative, the apparatus further includes:
the output unit is used for outputting prompt information when the comfort level of the first user is lower than the preset comfort level;
and the receiving unit is used for receiving the adjustment instruction information input by the user.
It can be appreciated that the output unit and the receiving unit form an important step of human-computer interaction between the user and the device, avoiding the disturbance of moving the target content without the user's permission.
With reference to the second aspect, in yet another alternative, the first determining unit is specifically configured to:
analyzing the target content and the position to obtain the line spacing of the target content;
determining the height of the first user writing content according to the line spacing and the preset line number;
determining the target area according to the height of the writing content;
and determining the area to be moved of the target content according to the target area.
It can be understood that the first determining unit analyzes and determines the position of the target area, so as to obtain a more reasonable result and meet the requirements of users.
With reference to the second aspect, in yet another alternative, the first determining unit is specifically configured to:
and determining the target content and a region to be moved corresponding to the position of the target content in the screen according to multiple groups of data, wherein each group of data comprises writing content, a corresponding relation among the position of the writing content in the screen and the moving region of the writing content.
It can be understood that the first determining unit includes a writing-area library and can automatically assign the area to be moved that corresponds to the target content and its position on the screen, finding the optimal solution from historical scenarios. The more historical data the writing-area library accumulates, the more universal the method and device become; and the more historical data of the current user the library holds, the more targeted they become.
With reference to the second aspect, in yet another alternative, the first determining unit is specifically configured to:
and inputting target content written by a first user on a screen and the position of the target content in the screen into a region prediction model to obtain a region to be moved of the target content.
It can be understood that the region prediction model is trained from multiple sets of data, where each set includes written content, the position of that content on a screen, and the movement area of that content; the written content and its position are feature data and the movement area is label data. The region prediction model is a neural network model generated on the basis of the self-learning capability of artificial intelligence: the movement area is obtained by feeding a large amount of data into the model, and the larger the amount of training data, the more accurately the region prediction model predicts.
In a third aspect, an embodiment of the present application provides an image-text position adjustment device that includes a processor, a memory, and a communication interface. The memory stores a computer program; the communication interface is used to send and/or receive data; and when the processor executes the computer program, the image-text position adjustment device performs the method described in the first aspect.
The processor included in the adjustment apparatus described in the third aspect may be a processor dedicated to performing the methods (referred to as a special purpose processor for convenience), or may be a processor that performs the methods by calling a computer program, such as a general purpose processor. In the alternative, the at least one processor may also include both special purpose and general purpose processors.
Alternatively, the above-mentioned computer program may be stored in a memory. For example, the Memory may be a non-transitory (non-transitory) Memory, such as a Read Only Memory (ROM), which may be integrated on the same device as the processor, or may be separately disposed on different devices, and the type of the Memory and the manner in which the Memory and the processor are disposed in the embodiments of the present application are not limited.
In a possible embodiment, the at least one memory is located outside the image-text position adjustment device.
In a further possible embodiment, the at least one memory is located within the image-text position adjustment device.
In a further possible embodiment, part of the at least one memory is located inside the image-text position adjustment device and another part is located outside it.
In this application, the processor and the memory may also be integrated in one device, i.e. the processor and the memory may also be integrated together.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having a computer program stored therein, which when executed on at least one processor, implements the method described in any of the preceding aspects.
In a fifth aspect, the present application provides a computer program product comprising a computer program for implementing the method of any one of the preceding aspects when said program is run on at least one processor.
Alternatively, the computer program product may be a software installation package, which may be downloaded and executed on a computing device in case the aforementioned method is required.
The technical solutions provided in the third to fifth aspects of the present application may refer to the beneficial effects of the technical solutions of the first aspect, and are not described herein again.
Drawings
The drawings that are used in the description of the embodiments will be briefly described below.
Fig. 1 is a schematic architecture diagram of a conference device according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an image-text position adjustment device according to an embodiment of the present application;
fig. 3 is a flow chart of an image-text position adjustment method provided in an embodiment of the present application;
fig. 4 is a schematic diagram of an embodiment of a method for adjusting a position of an image and text according to an embodiment of the present application;
fig. 5 is a schematic diagram of another embodiment of a method for adjusting a position of a graphic text according to an embodiment of the present application;
fig. 6 is a schematic diagram of still another embodiment of a method for adjusting an image-text position according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an image-text position adjusting device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
The terms "first," "second," "third," and "fourth" and the like in the description and in the claims of this application and in the drawings, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
The following describes a system architecture applied to the embodiment of the present application. It should be noted that, the system architecture and the service scenario described in the present application are for more clearly describing the technical solution of the present application, and do not constitute a limitation on the technical solution provided in the present application, and those skilled in the art can know that, with the evolution of the system architecture and the appearance of the new service scenario, the technical solution provided in the present application is also applicable to similar technical problems.
Referring to fig. 1, fig. 1 shows a conference system 10 provided in an embodiment of the present application. The conference system includes an image-text position adjustment device 101, a cloud platform 102, and other devices 103, where:
The cloud platform 102 may be a single server or a cluster of servers, and it can establish communication connections with the devices of multiple users so as to receive the text or video data sent by each user. Optionally, the cloud platform 102 may be a conference cloud platform on which the multi-party information exchange generated between the start and the end of a conference takes place; or it may act as a relay for information collection, sending a request to each user's devices when the users' behavior information needs to be collected, gathering that information, and forwarding it to the image-text position adjustment device 101; or it may store the video information and image-text content related to the conference. The cloud platform 102 can therefore obtain relevant data from many users, such as written content and writing positions, and can also accumulate the historical behavior data of many users.
The other devices 103 are, specifically, the devices used by the other users and may include one or more computers, optionally equipped with common peripherals such as a display, a camera, and a microphone, used to communicate with the conference initiator. Optionally, during a conference the other devices 103 collect the users' video or voice information and send it to the cloud platform 102 or directly to the image-text position adjustment device 101, where it can be used to evaluate the viewing experience of the users of the other devices 103.
The image-text position adjustment device 101 is, specifically, the device used by the first user. It is a device with data processing capability that is configured with a screen or connected to an external screen via a data cable or in another way. The image-text position adjustment device 101 includes a memory 201, a processor 202, and a screen 203, which may be interconnected by a bus or wirelessly; the device may be an electronic whiteboard, a virtual whiteboard in a cloud desktop, or a display, as shown in fig. 2. In an alternative solution, the image-text position adjustment device 101 further includes a communication interface and a user interface, where the communication interface is used to send and/or receive data and the user interface connects to components related to information collection; when the user's behavior information needs to be collected, the devices connected to the user interface operate to collect the related video information.
On the basis of the above architecture, consider a video conference as the application scenario. When the conference starts, one user acts as the initiator and sends video and voice information to the other users; the video includes the image-text content on the screen, which may already exist or may be written during the conference, and during the conference the other users optionally send video, voice, or text information to the cloud platform. For ease of understanding, this can be viewed as the video conference widely used in modern work, except that the screen content shared with all participants is the image-text content on an electronic whiteboard rather than a cloud desktop. When the first user, that is, the presenter, writes image-text content on the screen, the behavior information of the first user is obtained through the cloud platform or directly through the other devices 103 and sent to the image-text position adjustment device 101. The relevant units of the image-text position adjustment device 101 detect the first user's comfort level and analyze whether the image-text position needs to be adjusted; when adjustment is needed, the target content is moved to the area to be moved.
During the application of the above scenario, the processor 202 in the image-text position adjustment device 101 invokes the computer program in the memory 201, thereby implementing the method embodiment shown in fig. 3 below.
Referring to fig. 3, fig. 3 is a flow chart of an image-text position adjustment method provided in an embodiment of the present application. The method may be implemented based on the conference system shown in fig. 1 or based on another architecture, and includes, but is not limited to, the following steps:
step S301: and detecting the comfort degree of the first user according to the behavior information of the first user in the process of writing on the screen.
Specifically, the first user is the user who writes on the screen, usually the presenter. The method places few restrictions on the scenario or equipment: the screen can be a display, a cloud desktop, a whiteboard in a cloud desktop, or a physical electronic whiteboard, and the way the first user writes on the screen is not restricted either; input can be by finger touch, by keyboard, through a medium such as a whiteboard pen, or possibly even by voice. The behavior information includes the duration of pauses and/or the body movements of the first user while writing, and the comfort level is used to reflect the first user's writing experience.
The comfort detection specifically scores the first user's comfort according to the pause durations and/or body movements of the first user while writing. A pause duration is the time the first user pauses while writing: in general, during continuous writing on the screen there is a short pause between one stroke and the next, and the pause between strokes is shorter than the pause when changing lines. A preset pause duration is therefore set to be longer than the pause of a normal line change; in one possible implementation the preset pause duration is 350 ms, and it is used to evaluate whether a line-change pause is longer than usual because the first user has to make a large movement or feels uncomfortable.
The method has a wide range of application scenarios and can be applied to video cloud conferences, screen-casting conferences, or teaching organizations, all of which have image acquisition devices. In one possible implementation, a video cloud conference shows the video image of every participant, so the first user's body movements can be analyzed from the existing video images to judge whether the first user repeatedly performs fatigue-inducing actions such as bending over, lowering the head, or squatting. The first user's body movements are thus an important component of the comfort level, which reflects the first user's writing experience.
In one possible implementation, a score is calculated from the pause durations and body movements to determine whether the comfort level is lower than the preset comfort level: a corresponding number of points is deducted whenever a pause exceeds the preset pause duration or a fatigue-inducing movement occurs, and when the score falls below the score of the preset comfort level, the first user's comfort level is determined to be lower than the preset comfort level.
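A minimal Python sketch of this scoring idea follows. The 350 ms preset pause and the 100/90/2/3 point values are taken from the examples in this description, while the function and event names are assumptions made here for illustration:

```python
PRESET_PAUSE_MS = 350      # preset pause duration from the example above
FULL_SCORE = 100           # illustrative full score
PRESET_COMFORT_SCORE = 90  # illustrative preset comfort threshold
PAUSE_PENALTY = 2          # points deducted per over-long pause
POSTURE_PENALTY = 3        # points deducted per fatigue-inducing movement

def comfort_below_preset(pause_durations_ms, posture_events):
    """Score the first user's comfort from pause durations (ms) and detected
    fatigue-inducing movements ('bend', 'lower_head', 'squat'), and report
    whether the score falls below the preset comfort level."""
    score = FULL_SCORE
    for pause in pause_durations_ms:
        if pause > PRESET_PAUSE_MS:
            score -= PAUSE_PENALTY
    for event in posture_events:
        if event in ("bend", "lower_head", "squat"):
            score -= POSTURE_PENALTY
    return score < PRESET_COMFORT_SCORE

# Example: three over-long pauses and two bends -> 100 - 6 - 6 = 88 < 90.
print(comfort_below_preset([400, 380, 500, 200], ["bend", "bend"]))  # True
```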
Step S302: and outputting prompt information.
Specifically, the output manner is not unique: the prompt information can be presented to the user as a pop-up window or a voice prompt. The prompt information is used to ask whether the target content should be moved and includes the text of that question and instruction options. The target content is the content the first user has already written on the screen, and the instruction options let the first user choose whether to move it; how the options are selected is also not unique and depends on the functions of the current device, for example voice control or touch.
In a possible implementation, the prompt information includes two instruction options, "yes" and "no". When the first user touches the option shown as "yes", it is determined that the first user needs the target content moved and step S303 is performed; when the first user touches the option shown as "no", comfort detection is performed on the first user again.
If the first user's interface is the same as the viewers' viewing interface, the prompt information is output to a region that does not affect the viewers' experience, such as a blank region to one side of the image-text content.
Step S303: and receiving indication information aiming at the prompt information.
Specifically, the indication information is used to indicate that the target content should be moved. When the first user chooses to move the target content, corresponding indication information is generated and passed to the processor, which proceeds to step S304 after receiving it. When the first user chooses not to move the target content, corresponding rejection information is generated and passed to the processor, which returns to step S301 after receiving it.
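The following short Python sketch illustrates the flow of steps S302 and S303 under assumed helper names (prompting the user, then branching on the reply); the actual prompt mechanism described above may be a pop-up window or a voice prompt:

```python
def ask_user_to_move(show_prompt, on_confirm, on_reject):
    """Step S302/S303 sketch: output a prompt asking whether to move the
    target content, then act on the user's instruction option."""
    reply = show_prompt("Writing looks uncomfortable. Move the existing "
                        "content to free up your writing area?",
                        options=("yes", "no"))
    if reply == "yes":
        on_confirm()   # continue to step S304: determine the area to be moved
    else:
        on_reject()    # return to step S301: keep detecting comfort

# Example wiring with trivial stand-ins for the real UI and handlers.
ask_user_to_move(
    show_prompt=lambda text, options: "yes",
    on_confirm=lambda: print("determine area to be moved (S304)"),
    on_reject=lambda: print("re-detect comfort (S301)"),
)
```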
Step S304: and determining a region to be moved of the target content according to the target content written by the first user on the screen and the position of the target content in the screen.
Specifically, the target content includes one or more of text, patterns, symbols, and flowcharts. Once the indication information has been received, the area to be moved for the target content can be analyzed and determined. The area is determined mainly according to the target content written by the first user on the screen and its position on the screen, but it can also be determined with the help of other information, as in the following cases:
in a first case, determining a region to be moved of target content according to the target content written by a first user on a screen and the position of the target content in the screen, and the specific implementation manner is as follows:
the method comprises the following steps: referring to fig. 4, by analyzing the target content and the position of the target content in the screen to obtain the line spacing of the target content, determining the height of the first user writing content according to the line spacing and a preset line number, optionally, taking the height of the first user writing content as the height of the first user target area, wherein the preset line number refers to the writing habit of the historical user or is set according to the historical writing habit of the current user, and is usually 4-7 lines, or more or less can be set by the user in a personalized way, and here, the description is given by taking the preset line number as 6 line examples, it can be understood that the line spacing between 6 lines of the target content is obtained from top to bottom, and the height of 6 lines of graphic contents is calculated according to the obtained line spacing, and can be directly taken as the height of the target area;
Obtaining a minimum rectangular area surrounding the writing content according to the height of the writing content, and determining the minimum rectangular area as a target area, wherein in a possible implementation manner, the height of the writing content is used as the longitudinal height of the minimum rectangular area, the transverse width of the minimum rectangular area is determined according to the transverse widths of the left end and the right end of the target content, and finally the area of the minimum rectangular area and the position in a screen are determined; determining a region to be moved of the target content according to the target region, wherein the region to be moved is positioned above or on the left side of the target region according to writing habits of a regular person;
further, the area to be moved of the target content is determined according to the target area, so that the target content is moved to one side of the target area, wherein the area to be moved recommended to the user is firstly positioned at the left side of the target area, secondly positioned above the target area and finally positioned at the right side or the lower side of the target area according to the writing habit of a regular person from left to right and from top to bottom.
In a possible implementation manner, when the shape of the target area where the target content is located is defined, the shape is not limited to a rectangle, and the shape of the target area may be determined according to the shape of a flowchart or a pattern, and may be a circle, a triangle or a quadrilateral.
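A minimal sketch of this geometry step is shown below, under the assumption that each written line is available as a bounding box (left, top, right, bottom); the 6-line preset mirrors the example above, and all names are illustrative rather than part of the patent:

```python
PRESET_LINE_COUNT = 6  # preset number of lines, as in the example above

def target_area_rect(line_boxes):
    """Estimate the target area as the minimum rectangle enclosing the
    first PRESET_LINE_COUNT written lines.

    line_boxes: list of (left, top, right, bottom) tuples, one per line,
    ordered from top to bottom."""
    lines = line_boxes[:PRESET_LINE_COUNT]
    tops = [box[1] for box in lines]
    bottoms = [box[3] for box in lines]
    height = max(bottoms) - min(tops)       # height of the written content
    left = min(box[0] for box in lines)     # horizontal extent: left end
    right = max(box[2] for box in lines)    # horizontal extent: right end
    # Minimum rectangle as (x, y, width, height) in screen coordinates.
    return (left, min(tops), right - left, height)

# Example: six lines roughly 40 px apart, each 30 px tall.
boxes = [(100, 60 + i * 40, 620, 90 + i * 40) for i in range(6)]
print(target_area_rect(boxes))  # (100, 60, 520, 230)
```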
Method two: the area to be moved corresponding to the target content and its position on the screen is determined according to multiple sets of data.
Specifically, each set of data contains written content, the position of that content on a screen, the movement area of that content, and the correspondence among them;
Specifically, in one possible implementation a writing-area library is established. The library holds a large amount of mutually corresponding data about the content written by historical users in historical scenarios, the positions of that content on the screen, and the movement areas determined for it. By inputting the target content written by the first user and its position on the screen into the writing-area library, the corresponding movement area can be obtained and used as the area to be moved for the target content written by the first user. The writing-area library generally consists of historical data generated during earlier testing, but it can also consist of historical data generated by the first user in past scenarios.
For ease of description, a coordinate system is set up on the screen: the screen is divided into three rows and three columns, the columns are numbered 1, 2, 3 from left to right and the rows 1, 2, 3 from top to bottom, and the positions of the target area and the area to be moved are described with the coordinate system shown in Table 1:
TABLE 1
(1.1) (2.1) (3.1)
(1.2) (2.2) (3.2)
(1.3) (2.3) (3.3)
One possible implementation of the writing area library is shown in table 2:
TABLE 2
Inputting the target content written by the first user and its position on the screen into the writing-area library yields the movement area that corresponded to the same content and position in historical cases, and that area is used as the area to be moved for the target content written by the first user. For example, if the first user's target content is plain text and the target area is the region with coordinates (1.2) on the screen, the corresponding unit of the writing-area library outputs the region with coordinates (1.1) as the area to be moved.
In yet another possible implementation manner, the region to be moved obtained through the writing region library this time is taken as sample data and placed into the writing region library.
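The following Python sketch shows one way such a writing-area library lookup could work on the 3×3 coordinate grid of Table 1. The single (plain text, (1.2)) → (1.1) entry comes from the example above; every other entry, and the dictionary-based structure itself, is an assumption for illustration only (Table 2 is not reproduced here):

```python
# Writing-area library: (content type, target-area cell) -> movement area.
# Only the first entry is taken from the example in the description; the
# second is hypothetical filler so the lookup can run.
WRITING_AREA_LIBRARY = {
    ("plain text", (1, 2)): (1, 1),   # example given in the description
    ("flowchart", (2, 2)): (3, 2),    # hypothetical entry
}

def area_to_move(content_type, target_cell, library=WRITING_AREA_LIBRARY):
    """Look up the historical movement area for this content type and
    target-area cell; return None when the library has no match."""
    return library.get((content_type, target_cell))

def record_result(content_type, target_cell, moved_to,
                  library=WRITING_AREA_LIBRARY):
    """Put the area obtained this time back into the library as sample data,
    as described in the possible implementation above."""
    library[(content_type, target_cell)] = moved_to

print(area_to_move("plain text", (1, 2)))  # (1, 1)
```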
Method three: the target content written by the first user on the screen and its position on the screen are input into a region prediction model to obtain the area to be moved for the target content. The region prediction model is trained from multiple sets of data, where each set includes written content, the position of that content on the screen, and the movement area of that content; the written content and its position are feature data, and the movement area is label data. The model is a neural network model trained on the principles of artificial-intelligence self-learning and big data: the larger the amount of training data, the more accurately the region prediction model predicts, and a model trained on a large amount of data has general applicability. In one possible implementation, a region prediction model trained on the current user's own data is more targeted and better matches the current user's habits; in that possible implementation, the region predicted by the model trained on the current user's data is taken as the area to be moved.
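As a sketch of how such a region prediction model could be trained, the snippet below uses scikit-learn's MLPClassifier as a stand-in neural network; the feature encoding (a content-type id plus the target-area grid cell) and all training data are assumptions made here, not the patent's actual model:

```python
from sklearn.neural_network import MLPClassifier

# Features: [content_type_id, target_col, target_row]; label: index of the
# movement area on the 3x3 grid of Table 1 (index = (row-1)*3 + (col-1)).
# All training rows below are hypothetical, not data from the patent.
X = [
    [0, 1, 2],  # plain text, target area at (1.2)
    [0, 2, 2],  # plain text, target area at (2.2)
    [1, 2, 2],  # flowchart, target area at (2.2)
]
y = [0, 0, 5]   # movement areas (1.1), (1.1), (3.2) as flat grid indices

# A small feed-forward network stands in for the region prediction model;
# in practice a much larger training set would be used, since the
# description notes that accuracy grows with the amount of training data.
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000,
                      random_state=0)
model.fit(X, y)

print(model.predict([[0, 1, 2]]))  # predicted movement-area index
```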
In a second case, besides determining the area to be moved according to the target content written by the first user on the screen and its position on the screen, the area to be moved can also be determined according to the type of the target content.
Specifically, when the first user fails the comfort detection and the target content written by the first user consists only of a flowchart, the shape of each flowchart element drawn by the user is recognized against a flowchart library, the elements are laid out according to their shapes, and a suggested area to be moved is generated. In one possible implementation, the overall structure of the flowchart written by the first user is L-shaped; when the area to be moved is determined, this overall structure is taken into account and the flowchart is translated to one side of the target area so that the long side of the "L" is aligned with one side of the screen, which optimizes the layout of the image-text content on the screen and improves its appearance.
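A minimal sketch of the translation step in that L-shaped example follows; it simply shifts every flowchart element so the layout's left edge sits against the screen's left edge, and all names and the bounding-box representation are assumptions for illustration:

```python
def align_flowchart_left(element_boxes, screen_left=0, margin=20):
    """Translate all flowchart elements horizontally so the long side of the
    layout is aligned with the left edge of the screen (plus a margin).

    element_boxes: list of (left, top, right, bottom) boxes, one per shape."""
    layout_left = min(box[0] for box in element_boxes)
    dx = (screen_left + margin) - layout_left
    return [(l + dx, t, r + dx, b) for (l, t, r, b) in element_boxes]

# Example: an L-shaped arrangement of three boxes pushed to the left edge.
shapes = [(400, 100, 500, 160), (400, 200, 500, 260), (520, 200, 620, 260)]
print(align_flowchart_left(shapes))
# [(20, 100, 120, 160), (20, 200, 120, 260), (140, 200, 240, 260)]
```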
And in a third case, determining a region to be moved of the target content according to the target content written by the first user on the screen, the position of the target content in the screen and the behavior information of the second user.
Specifically, the step is performed after determining a region to be moved of the target content, which is an initial region to be moved, according to the target content written by the first user on the screen, and the position of the target content in the screen;
after the initial area to be moved is obtained, judging whether the initial area to be moved of the target content meets the requirement of the second user according to the behavior information of the second user, wherein the second user is other watching personnel except the first user, and the watching experience of the second user is greatly influenced if the conditions of shielding pictures, unclear pictures and texts or too dense pictures and texts occur in a normal conference, so that the watching experience of the second user is used as an important reference item for analyzing the area to be moved;
the behavior information comprises action information and language information: the action information comprises one or more of limb actions and head actions, the language information comprises voice or text information of the second user, and both are used to judge whether the initial area to be moved meets the requirement of the second user;
if the language information of the second user contains words indicating that the content cannot be seen clearly or is blocked, or words related to position adjustment (e.g. "move leftwards"), whether the initial area to be moved meets the requirement of the second user is judged: if it does, the initial area to be moved is determined as the area to be moved of the target content; if it does not, for example the second user explicitly asks for the content to be moved leftwards while the initial area to be moved lies on the right side of the target area, the determination of the area to be moved is restarted and the region where the initial area to be moved is located is excluded.
If the language information of the second user contains no such words, the initial area to be moved is directly determined as the area to be moved of the target content. A minimal sketch of this check is given below.
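The sketch assumes the second user's language information is already available as text, that screen positions are normalized so an x value above 0.5 means the right half of the screen, and that the keyword list and helper name are illustrative assumptions rather than terms from the patent.

# Hypothetical sketch: keywords in the second user's speech or text decide whether
# the initial area to be moved is kept or re-determined with that region excluded.
COMPLAINT_KEYWORDS = ("can't see", "not clear", "blocked", "move left", "move right")

def confirm_or_redetermine(initial_area, candidate_areas, language_info):
    """initial_area / candidate_areas: dicts with a normalized 'x' in [0, 1]."""
    text = language_info.lower()
    if not any(keyword in text for keyword in COMPLAINT_KEYWORDS):
        return initial_area  # no complaint or adjustment request: keep the initial area
    if "move left" in text and initial_area["x"] > 0.5:
        # The request conflicts with a right-side initial area: re-determine the
        # area to be moved, excluding the region where the initial area is located.
        remaining = [a for a in candidate_areas if a != initial_area]
        if remaining:
            return min(remaining, key=lambda a: a["x"])  # prefer the leftmost blank area
    return initial_area

# Example: the viewer asks for the content to be moved leftwards.
areas = [{"x": 0.8, "y": 0.1}, {"x": 0.1, "y": 0.1}]
print(confirm_or_redetermine(areas[0], areas, "Please move left, it is blocked"))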
Step S305: and moving the target content to the area to be moved so as to vacate a target area on the screen.
Specifically, the target area is an area that conforms to the writing habit of the first user; moving the target content to another area allows the first user to keep writing in that habitual area during subsequent writing. Possible implementation manners are as follows:
case 1: referring to fig. 5, the writing of the first user on the screen is temporarily finished, so that the target content written by the first user is directly moved to the area to be moved, so that the first user can still write in the target area in the subsequent writing process, and the comfort level of the first user in writing is ensured.
Case 2: referring to fig. 6, the first user keeps writing in an uncomfortable posture after the area to be moved is determined, so some of the first user's writing lies outside the target area. The target content inside the target area is first moved to the area to be moved, and the target content outside the target area is then moved into the target area, so that the first user can still write in the target area during subsequent writing and semantically connected image-text content is not pulled too far apart, which avoids a poor reading experience for viewers. A minimal sketch of this two-step move follows.
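The two-step move of case 2 could look like the following sketch, assuming every stroke simply records the name of the area it currently occupies; the stroke representation and the function name are assumptions.

# Hypothetical sketch: first vacate the target area, then gather the writing that
# lies outside it back into the target area so related content stays together.
def relocate_for_case_2(strokes, target_area, area_to_move):
    """strokes: list of dicts whose 'area' value names where the stroke currently sits."""
    for stroke in strokes:
        if stroke["area"] == target_area:
            stroke["area"] = area_to_move   # step 1: move content out of the target area
    for stroke in strokes:
        if stroke["area"] not in (target_area, area_to_move):
            stroke["area"] = target_area    # step 2: pull outside content back into it
    return strokes

print(relocate_for_case_2(
    [{"id": 1, "area": "target"}, {"id": 2, "area": "below-target"}],
    target_area="target", area_to_move="top-right"))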
Case 3: the first user's target content has been moved several times, meaning that several areas to be moved corresponding to different target content have already been generated. When an existing area has already been used as a movement area and another area to be moved is determined for further target content, the already used area is excluded and a blank area is selected as the new area to be moved, as sketched below.
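A minimal sketch of case 3, assuming the screen's blank areas and the already used movement areas are tracked as simple names; the helper name and the fallback behaviour when no blank area remains are assumptions.

# Hypothetical sketch: areas already used as movement areas are excluded when the
# next area to be moved is chosen.
def choose_next_area(blank_areas, used_areas):
    """Return the first blank area not already used as a movement area, else None."""
    for area in blank_areas:
        if area not in used_areas:
            return area
    return None  # nothing left; the caller may fall back to a new page or scrolling

print(choose_next_area(["top-right", "bottom-right", "bottom-left"], ["top-right"]))
# prints "bottom-right"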
According to the method, the real writing experience of the first user is obtained by detecting the first user's writing comfort, and this experience is used to judge whether the target content really needs to be moved. When the comfort of the first user falls below the preset comfort level, the image-text in the target area is moved elsewhere in the screen so that the first user can write in the target area again, fatigue-prone actions are avoided, and the first user's experience is improved.
The foregoing details the method of embodiments of the present application, and the apparatus of embodiments of the present application is provided below.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an image-text position adjustment apparatus 70 according to an embodiment of the present application; the apparatus 70 may include a first determining unit 701 and a moving unit 702, which are described in detail below.
A first determining unit 701, configured to determine a region to be moved of a target content written by a first user on a screen according to the target content and a position of the target content in the screen;
a moving unit 702, configured to move the target content to the area to be moved, so as to blank a target area on the screen, where the target area belongs to an area conforming to the writing habit of the first user, which is determined according to the history writing area information of the first user;
it can be understood that the first determining unit 701 obtains the area to be moved corresponding to the target content and its position in the screen, and the moving unit 702 can move the target content to the area to be moved by sliding, replacing or another manner, so that the target area is vacated and the first user can still write in an area conforming to the first user's writing habit.
In yet another alternative, the apparatus further comprises:
the detection unit is used for detecting the comfort level of the first user according to the behavior information of the first user in the process of writing on the screen;
and the second determining unit is used for determining that the comfort level is lower than a preset comfort level.
It can be understood that the detection unit detects the comfort level of the first user according to the first user's behavior information, so that whether the target content needs to be moved is judged from the first user's own experience, and the second determining unit checks and confirms the user's comfort level result.
In yet another alternative, the detection unit is specifically configured to:
and scoring the comfort level of the first user according to the time length and/or limb movement of the first user when the first user pauses in the process of writing the content.
It can be understood that after the behavior information of the first user is obtained, the pause durations and limb actions reflected in it are recorded and scored, and points are deducted cumulatively each time a corresponding event occurs until the comfort score falls below the score of the preset comfort level. As an example: the first user's comfort score starts from a full score of 100 and the preset comfort level corresponds to a score of 90; 2 points are deducted each time a pause exceeds the preset pause duration, and 3 points are deducted each time a fatigue-prone action occurs, so that the first user's comfort is strictly controlled. A minimal scoring sketch follows.
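The sketch uses the numbers from the example above (a full score of 100, a preset comfort score of 90, 2 points per over-long pause, 3 points per fatigue-prone action); the pause threshold and the function names are assumptions.

# Hypothetical sketch: deductions accumulate until the score drops below the
# preset comfort level, at which point the target content should be moved.
PRESET_COMFORT = 90
PRESET_PAUSE_SECONDS = 5.0  # assumed threshold for an over-long pause

def comfort_score(pause_durations, fatigue_action_count):
    score = 100
    score -= 2 * sum(1 for d in pause_durations if d > PRESET_PAUSE_SECONDS)
    score -= 3 * fatigue_action_count
    return score

def needs_move(pause_durations, fatigue_action_count):
    """True when the first user's comfort falls below the preset comfort level."""
    return comfort_score(pause_durations, fatigue_action_count) < PRESET_COMFORT

print(needs_move([6.2, 2.1, 7.5], fatigue_action_count=3))  # True: 100 - 4 - 9 = 87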
In yet another alternative, the apparatus further comprises:
the output unit is used for outputting prompt information when the comfort level of the first user is lower than the preset comfort level;
and the receiving unit is used for receiving the adjustment instruction information input by the user.
It can be appreciated that the output unit and the receiving unit form an important human-computer interaction step between the user and the device, avoiding the annoyance of moving the target content without the user's permission.
In yet another alternative, the first determining unit is specifically configured to:
analyzing the target content and the position to obtain the line spacing of the target content;
determining the height of the first user writing content according to the line spacing and the preset line number;
determining the target area according to the height of the writing content;
and determining the area to be moved of the target content according to the target area.
It can be understood that the first determining unit analyzes the target content to determine the position of the target area, so as to obtain a more reasonable result that meets the user's requirements, as sketched below.
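The sketch assumes the written target content is summarized as a bounding box with y growing downward and that the target area keeps the content's horizontal extent while its height equals the line spacing multiplied by a preset line number; the box layout and the default of three lines are assumptions.

# Hypothetical sketch: the target area is sized from the measured line spacing and
# a preset number of lines, at the position where the user habitually writes.
def determine_target_area(content_box, line_spacing, preset_lines=3):
    """content_box: {'x', 'y', 'w', 'h'} of the written target content."""
    height = line_spacing * preset_lines  # room the user needs for the next lines
    return {"x": content_box["x"], "y": content_box["y"],
            "w": content_box["w"], "h": height}

print(determine_target_area({"x": 100, "y": 200, "w": 600, "h": 180}, line_spacing=40))
# prints {'x': 100, 'y': 200, 'w': 600, 'h': 120}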
In yet another alternative, the first determining unit is specifically configured to:
and determining the target content and a region to be moved corresponding to the position of the target content in the screen according to multiple groups of data, wherein each group of data comprises writing content, a corresponding relation among the position of the writing content in the screen and the moving region of the writing content.
It can be understood that the first determining unit includes a writing area library and can automatically allocate an area to be moved corresponding to the target content and its position in the screen by finding the best match among history scenes. The more history data accumulated in the writing area library, the more universally applicable the method and apparatus; the more history data of the current user in the writing area library, the more targeted the method and apparatus. A lookup sketch is given below.
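The sketch assumes each history record stores the writing content, a normalized (x, y) position and the moving region that was used, and that the record whose position is closest to the current one supplies the area to be moved; the record layout and the nearest-match rule are assumptions.

# Hypothetical sketch: pick the moving region of the history record whose writing
# position is closest to the current target content's position.
import math

def lookup_writing_area_library(library, current_position):
    """library: list of dicts with 'position' (x, y) and 'moving_region' keys."""
    if not library:
        return None
    best = min(library, key=lambda rec: math.dist(rec["position"], current_position))
    return best["moving_region"]

history = [
    {"content": "agenda", "position": (0.2, 0.3), "moving_region": "top-right"},
    {"content": "diagram", "position": (0.7, 0.6), "moving_region": "bottom-left"},
]
print(lookup_writing_area_library(history, (0.25, 0.35)))  # prints "top-right"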
In yet another alternative, the first determining unit is specifically configured to:
and inputting target content written by a first user on a screen and the position of the target content in the screen into a region prediction model to obtain a region to be moved of the target content.
It can be understood that the region prediction model is a neural network model obtained by training on multiple sets of data, each set of data comprising written content, the position of the written content in the screen and the moving region of the written content; the written content and its position in the screen are feature data and the moving region is label data. The model is generated based on the self-learning capability of artificial intelligence: the moving region is obtained by feeding the data into the region prediction model, and the larger the training amount, the more accurately the region prediction model predicts.
Embodiments of the present application also provide a computer readable storage medium having a computer program stored therein, which when run on a processor, implements the method flow shown in fig. 3.
Embodiments of the present application also provide a computer program product for implementing the method flow shown in fig. 3 when the computer program product is run on a processor.
Those of ordinary skill in the art will appreciate that all or part of the processes of the above-described embodiment methods may be implemented by a computer program instructing relevant hardware. The program may be stored on a computer readable storage medium and, when executed, may include the processes of the above-described method embodiments. The aforementioned storage medium includes various media capable of storing computer program code, such as read-only memory (ROM), random access memory (RAM), magnetic disks or optical discs.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly understand that the embodiments described herein may be combined with other embodiments.
As used in this specification, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between 2 or more computers. Furthermore, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from two components interacting with one another in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal).

Claims (7)

1. An image-text position adjustment method, characterized by comprising the following steps:
determining a region to be moved of target content according to the target content written by a first user on a screen and the position of the target content in the screen;
Moving the target content to the area to be moved so as to vacate a target area on the screen, wherein the target area belongs to an area conforming to the writing habit of the first user, which is determined according to the history writing area information of the first user;
wherein the determining the area to be moved of the target content according to the target content written by the first user on the screen and the position of the target content in the screen comprises the following steps:
determining an initial area to be moved of target content according to the target content written by a first user on a screen and the position of the target content in the screen;
judging whether an initial area to be moved of the target content meets the requirement of a second user according to behavior information of the second user, wherein the second user is other watching staff except the first user; wherein the behavior information comprises action information and language information, the action information comprises one or more of limb action and head action, and the language information comprises voice or text information of the second user;
if the language information of the second user contains words indicating that the content cannot be seen clearly or is blocked, or words related to position adjustment, judging whether the initial area to be moved meets the requirement of the second user;
If the to-be-moved area of the target content meets the requirement of the second user, determining the initial to-be-moved area as the to-be-moved area of the target content;
and if the to-be-moved area of the target content does not meet the requirement of the second user, restarting to determine the to-be-moved area, and excluding the area where the initial to-be-moved area is located.
2. The method of claim 1, wherein before determining the area to be moved of the target content based on the target content written by the first user on the screen and the position of the target content in the screen, further comprising:
detecting the comfort level of the first user according to the behavior information of the first user in the writing process on the screen, wherein the comfort level is used for reflecting the writing experience of the first user in the writing process;
and determining that the comfort level is lower than a preset comfort level.
3. The method according to claim 2, wherein before the determining the area to be moved of the target content according to the target content written by the first user on the screen and the position of the target content in the screen, after the determining that the comfort level is lower than a preset comfort level, further comprises:
Outputting prompt information, wherein the prompt information is used for inquiring whether to move the target content;
and receiving indication information aiming at the prompt information, wherein the indication information is used for indicating the movement of the target content.
4. A method according to claim 2 or 3, wherein said comfort detection of the first user from behavior information of the first user during writing on the screen comprises:
scoring the comfort level of the first user based on the duration of the first user's pauses during writing and/or limb actions, the limb actions including one or more of bending over, lowering the head, and squatting down.
5. An image-text position adjusting device, characterized in that the device comprises:
a first determining unit, configured to determine a region to be moved of a target content according to the target content written by a first user on a screen and a position of the target content in the screen;
a moving unit, configured to move the target content to the area to be moved, so as to blank a target area on the screen, where the target area belongs to an area conforming to the writing habit of the first user, which is determined according to the history writing area information of the first user;
The first determining unit is specifically configured to:
determining an initial area to be moved of target content according to the target content written by a first user on a screen and the position of the target content in the screen;
judging whether an initial area to be moved of the target content meets the requirement of a second user according to behavior information of the second user, wherein the second user is other watching staff except the first user; wherein the behavior information comprises action information and language information, the action information comprises one or more of limb action and head action, and the language information comprises voice or text information of the second user;
if the language information of the second user contains words indicating that the content cannot be seen clearly or is blocked, or words related to position adjustment, judging whether the initial area to be moved meets the requirement of the second user;
if the to-be-moved area of the target content meets the requirement of the second user, determining the initial to-be-moved area as the to-be-moved area of the target content;
and if the to-be-moved area of the target content does not meet the requirement of the second user, restarting to determine the to-be-moved area, and excluding the area where the initial to-be-moved area is located.
6. An image-text position adjustment device, characterized in that the device comprises at least one processor, a communication interface for transmitting and/or receiving data, and a memory for storing a computer program, the at least one processor being arranged to invoke the computer program stored in the memory, such that the device implements the method according to any one of claims 1-4.
7. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when run on a processor, causes the computer to perform the method according to any of claims 1-4.
CN202210795262.6A 2022-07-07 2022-07-07 Image-text position adjustment method, image-text position adjustment equipment and storage medium Active CN115167736B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210795262.6A CN115167736B (en) 2022-07-07 2022-07-07 Image-text position adjustment method, image-text position adjustment equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115167736A CN115167736A (en) 2022-10-11
CN115167736B true CN115167736B (en) 2024-04-12

Family

ID=83492152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210795262.6A Active CN115167736B (en) 2022-07-07 2022-07-07 Image-text position adjustment method, image-text position adjustment equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115167736B (en)

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101719036A (en) * 2009-12-11 2010-06-02 北京洲洋伟业信息技术有限公司 Method for inducing action of mouse by combining mobile position
CN106155371A (en) * 2015-03-25 2016-11-23 联想(北京)有限公司 Information processing method, device and electronic equipment
CN106445386A (en) * 2016-09-28 2017-02-22 广州视睿电子科技有限公司 Handwriting display method and device
CN106970681A (en) * 2017-02-21 2017-07-21 广州视源电子科技股份有限公司 Write display methods and its system
CN107562330A (en) * 2017-08-21 2018-01-09 广州视源电子科技股份有限公司 A kind of display methods of handwritten content, device, equipment and storage medium
CN107729298A (en) * 2017-10-31 2018-02-23 努比亚技术有限公司 Screen occlusion area processing method, mobile terminal and computer-readable recording medium
CN108509142A (en) * 2018-04-08 2018-09-07 广州视源电子科技股份有限公司 A kind of writing software exchange method, device, terminal device and storage medium
CN109032481A (en) * 2018-06-29 2018-12-18 维沃移动通信有限公司 A kind of display control method and mobile terminal
CN109271086A (en) * 2018-08-31 2019-01-25 湖南新云网科技有限公司 A kind of Writing method of electronic whiteboard, storage medium and electronic whiteboard
CN109343785A (en) * 2018-08-31 2019-02-15 湖南新云网科技有限公司 Writing method, storage medium and the electronic whiteboard of electronic whiteboard
CN109407898A (en) * 2018-08-31 2019-03-01 湖南新云网科技有限公司 A kind of display methods of electronic whiteboard, storage medium and electronic whiteboard
CN110941383A (en) * 2019-10-11 2020-03-31 广州视源电子科技股份有限公司 Double-screen display method, device, equipment and storage medium
CN111679779A (en) * 2020-05-09 2020-09-18 深圳市鸿合创新信息技术有限责任公司 Automatic paging method and device for electronic writing board, terminal and storage medium
CN111949169A (en) * 2020-06-30 2020-11-17 北京百度网讯科技有限公司 Application interface display method and device
CN112269515A (en) * 2020-11-12 2021-01-26 Oppo广东移动通信有限公司 Multi-window processing method and device on mobile terminal, mobile terminal and medium
CN112947824A (en) * 2021-01-28 2021-06-11 维沃移动通信有限公司 Display parameter adjusting method and device, electronic equipment and medium
CN113934356A (en) * 2019-10-09 2022-01-14 广州视源电子科技股份有限公司 Display operation method, device, equipment and storage medium of intelligent interactive panel
KR102383022B1 (en) * 2021-07-27 2022-04-08 (주)알이시스 Electronic blackboard with user's personal screen assignment
WO2022127767A1 (en) * 2020-12-18 2022-06-23 维沃移动通信有限公司 Writing display processing method, and electronic device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106293444B (en) * 2015-06-25 2020-07-03 小米科技有限责任公司 Mobile terminal, display control method and device
KR20190035341A (en) * 2017-09-26 2019-04-03 삼성전자주식회사 Electronic board and the control method thereof
WO2020080878A1 (en) * 2018-10-19 2020-04-23 주식회사 네오랩컨버전스 Electronic pen, electronic device, and controlling method therefor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Analysis of the effective application of SMART Board in regular classroom teaching; Yue Hongwei; Yuan Xuxia; Open Education Research; 2008-06-05 (No. 03); 89-93 *


Similar Documents

Publication Publication Date Title
CN102693047B (en) The method of numerical information and image is shown in cooperative surroundings
EP3769509B1 (en) Multi-endpoint mixed-reality meetings
EP2498485B1 (en) Automated selection and switching of displayed information
CN109407954B (en) Writing track erasing method and system
KR20170080538A (en) Content displaying method based on smart desktop and smart desktop terminal thereof
CN106909246B (en) Electronic writing and erasing method and intelligent touch television
CN106412232A (en) Scaling method and device for controlling operation interface, and electronic device
CN111836093B (en) Video playing method, device, equipment and medium
CN109697004B (en) Method, device and equipment for writing annotation by touch equipment and storage medium
CN112181171A (en) Input method and device of intelligent pen on intelligent display terminal and intelligent pen
CN111580903A (en) Real-time voting method, device, terminal equipment and storage medium
US20150301726A1 (en) Systems and Methods for Displaying Free-Form Drawing on a Contact-Sensitive Display
CN115167736B (en) Image-text position adjustment method, image-text position adjustment equipment and storage medium
JP6834197B2 (en) Information processing equipment, display system, program
CN109358799B (en) Method for adding handwritten annotation information input by user on handwriting equipment
JP6293903B2 (en) Electronic device and method for displaying information
US20220374188A1 (en) Electronic billboard and controlling method thereof
CN113934323A (en) Multi-point display method and device based on intelligent blackboard and terminal equipment
JP6699406B2 (en) Information processing device, program, position information creation method, information processing system
CN112565844B (en) Video communication method and device and electronic equipment
CN104516566A (en) Handwriting input method and device
CN113377220B (en) Information storage method and device
CN114741151B (en) Split screen display method and device, electronic equipment and readable storage medium
US11538503B1 (en) Information processing apparatus, information processing method, and non-transitory computer readable medium
WO2022206171A1 (en) Remote writing display method and apparatus, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant