CN111309211A - Picture processing method and device and storage medium

Info

Publication number
CN111309211A
CN111309211A
Authority
CN
China
Prior art keywords
target picture
target
presenting
session
collaborative editing
Prior art date
Legal status
Pending
Application number
CN202010098716.5A
Other languages
Chinese (zh)
Inventor
邵和明
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202010098716.5A
Publication of CN111309211A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817: Interaction techniques based on graphical user interfaces [GUI] using icons
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845: Interaction techniques based on graphical user interfaces [GUI] for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/60: Editing figures and text; Combining figures or text
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00: Data switching networks
    • H04L 12/02: Details
    • H04L 12/16: Arrangements for providing special services to substations
    • H04L 12/18: Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
    • H04L 12/185: Arrangements for broadcast or conference with management of multicast group membership
    • H04L 51/00: User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L 51/07: User-to-user messaging characterised by the inclusion of specific contents
    • H04L 51/10: Multimedia information

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Information Transfer Between Computers (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a picture processing method, a picture processing apparatus, an electronic device, and a storage medium. The method includes: presenting, through a session window of a group session, a session message containing a target picture; presenting an operation function key corresponding to the target picture, the operation function key indicating a function entry for collaboratively editing the target picture; and, in response to a click operation on the operation function key, presenting a collaborative editing page corresponding to the target picture. The collaborative editing page is used by at least two session members of the group session to edit the target picture and to present the synchronously obtained edited target picture. The invention enables collaborative editing of pictures in a group session and avoids the message confusion that arises when multiple people edit a picture by sending messages about it in the session window, thereby improving the efficiency of picture processing in the group session.

Description

Picture processing method and device and storage medium
Technical Field
The present invention relates to the field of internet technologies, and in particular, to a method and an apparatus for processing a picture, an electronic device, and a storage medium.
Background
With the rapid development of internet technology, more and more users discuss and design work schemes through instant messaging clients. In the related art, a user usually sends the content of a design scheme to a group session as a screenshot so that others can publish opinions, discuss the scheme, and so on. Other participants feed back personal opinions or evaluations to the group session by sending annotated design pictures or instant messages, thereby discussing the design scheme.
However, when multiple people in a group session publish different opinions at the same time, the information becomes cluttered and important information is easily overlooked; and if several design schemes are discussed simultaneously, the information easily becomes confused, which greatly reduces the users' working efficiency and degrades the user experience.
Disclosure of Invention
Embodiments of the present invention provide a picture processing method and apparatus, an electronic device, and a storage medium, which enable collaborative editing of pictures in a group session and avoid the message confusion that arises when multiple people edit a picture by sending messages about it in the session window, thereby improving the efficiency of picture processing in the group session.
The technical solutions of the embodiments of the present invention are implemented as follows:
An embodiment of the present invention provides a picture processing method, which comprises the following steps:
presenting, through a session window of a group session, a session message containing a target picture;
presenting an operation function key corresponding to the target picture, wherein the operation function key indicates a function entry for collaboratively editing the target picture;
in response to a click operation on the operation function key, presenting a collaborative editing page corresponding to the target picture;
wherein the collaborative editing page is used by at least two session members of the group session to edit the target picture and to present the synchronously obtained edited target picture.
An embodiment of the present invention further provides a picture processing apparatus, comprising:
a first presentation module, configured to present, through a session window of a group session, a session message containing a target picture;
a second presentation module, configured to present an operation function key corresponding to the target picture, wherein the operation function key indicates a function entry for collaboratively editing the target picture;
a third presentation module, configured to present, in response to a click operation on the operation function key, a collaborative editing page corresponding to the target picture;
wherein the collaborative editing page is used by at least two session members of the group session to edit the target picture and to present the synchronously obtained edited target picture.
In the above solution, the second presentation module is further configured to, in response to a click operation on the session message containing the target picture,
present the target picture and present the operation function key within the presented target picture.
In the above solution, the second presentation module is further configured to present the operation function key corresponding to the target picture through the session window of the group session.
In the above solution, the third presentation module is further configured to present the collaborative editing page containing an annotation function key;
wherein the annotation function key indicates a function entry for annotating a target portion of the target picture.
In the above solution, the third presentation module is further configured to present, in response to a click operation on the annotation function key, an area selection box for selecting the area corresponding to the target portion;
and to present, upon receiving an area selection instruction for the target portion triggered through the area selection box, a first content input box corresponding to the target portion;
wherein the first content input box is used for inputting and displaying first content related to the target portion so as to annotate the target portion.
In the above solution, the third presentation module is further configured to present, in response to a click operation on the annotation function key, a function icon identifying that the target portion has a corresponding annotation;
and to present a second content input box associated with the function icon in response to an annotation instruction for the target portion triggered through the function icon.
In the above solution, the third presentation module is further configured to receive and present second content related to the target portion that is input through the second content input box;
and to hide the second content in response to an input-completion instruction and to display the hidden second content when a click operation on the function icon is received.
In the above solution, the third presentation module is further configured to present the collaborative editing page containing an exit function key;
wherein the exit function key indicates a function entry for exiting the collaborative editing of the target picture.
In the above solution, the apparatus further comprises:
a fourth presentation module, configured to present, through a session window of the group session or through the collaborative editing page, at least one of the following operation states for the target picture:
the number of session members participating in editing the target picture;
the identifiers of the session members participating in editing the target picture;
the number of messages input for the target picture by the session members participating in editing it;
and an identifier prompting that the messages corresponding to the target picture have been updated.
In the above solution, the apparatus further comprises:
a first sending module, configured to, in response to a notification instruction for a first target session member,
send a first notification message to the first target session member to notify the first target session member to pay attention to the collaborative editing page.
In the above solution, the apparatus further comprises:
a fifth presentation module, configured to present, in response to a permission-setting instruction for the session members of the group session, a member selection interface containing the session members of the group session;
and to determine, in response to a member selection instruction triggered through the member selection interface, the selected session member to be a second target session member, the second target session member having the operation authority for the target picture.
In the above solution, the apparatus further comprises:
a second sending module, configured to send a second notification message to the session members having the operation authority for the target picture, to notify them to collaboratively edit the target picture.
In the above solution, the apparatus further comprises:
a function closing module, configured to obtain, in response to a collaborative-editing close instruction for the target picture, the collaborative-editing close authority of a third target session member corresponding to the close instruction;
and to close the collaborative editing function for the target picture when it is determined that the third target session member has the collaborative-editing close authority.
An embodiment of the present invention further provides an electronic device, including:
a memory for storing executable instructions;
a processor, configured to implement the picture processing method provided by the embodiments of the present invention when executing the executable instructions stored in the memory.
An embodiment of the present invention further provides a storage medium storing executable instructions which, when executed by a processor, implement the picture processing method provided by the embodiments of the present invention.
The embodiment of the invention has the following beneficial effects:
By applying the embodiments of the present invention, when the terminal presents a session message containing a target picture, it also presents an operation function key corresponding to the target picture, the operation function key indicating a function entry for collaboratively editing the target picture; when a click operation on the operation function key is received, a collaborative editing page corresponding to the target picture is presented, so that at least two session members can collaboratively edit the target picture and the synchronously obtained edited target picture is presented. In this way, session members can enter a collaborative editing page on which the target picture can be edited through the function entry provided by the operation function key, and multiple session members can edit the target picture collaboratively on that page instead of expressing their editing intentions as group session messages. This avoids the message confusion that arises when multiple people edit a picture by sending messages about it in the session window, and thereby improves the efficiency of picture processing in the group session.
Drawings
FIG. 1 is a schematic diagram of a typical scene of picture-based multi-user collaborative work provided in the related art;
FIG. 2 is a schematic diagram of the architecture of a picture processing system according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 4 is a schematic flowchart of a picture processing method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of presenting a session message containing a target picture according to an embodiment of the present invention;
FIG. 6 is a first schematic diagram of presenting an operation function key according to an embodiment of the present invention;
FIG. 7 is a second schematic diagram of presenting an operation function key according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a collaborative editing page according to an embodiment of the present invention;
FIG. 9 is a first schematic diagram of a picture processing flow based on a collaborative editing page according to an embodiment of the present invention;
FIG. 10 is a second schematic diagram of a picture processing flow based on a collaborative editing page according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a second content input box according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of presenting the operation state of a target picture through a session window according to an embodiment of the present invention;
FIG. 13 is a schematic diagram of presenting the operation state of a target picture through a collaborative editing page according to an embodiment of the present invention;
FIG. 14 is a schematic flowchart of a picture processing method according to an embodiment of the present invention;
FIG. 15 is a schematic flowchart of collaborative picture editing according to an embodiment of the present invention;
FIG. 16 is a schematic structural diagram of a picture processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present invention, and all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first \ second \ third" are used only to distinguish similar objects and do not denote a particular order. It should be understood that "first \ second \ third" may be interchanged in a specific order or sequence, where permitted, so that the embodiments of the invention described herein can be practiced in an order other than that shown or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used herein is for the purpose of describing embodiments of the invention only and is not intended to be limiting of the invention.
Before the embodiments of the present invention are described in further detail, the terms and expressions used in the embodiments are explained; the following explanations apply to these terms and expressions.
1) "In response to" indicates the condition or state on which a performed operation depends; when the condition or state is satisfied, the one or more operations performed may be executed in real time or with a set delay. Unless otherwise specified, there is no restriction on the order in which the operations are performed.
2) Collaborative editing: an editing mode in which multiple users edit online together and the edited content is stored in the cloud in real time. Synchronization of each user's edits relies on communication between the client and the server, so every user participating in the collaborative editing can see the other users' edits in real time.
When multiple people discuss the same design scheme together, the communication is usually done online through instant messaging clients (such as QQ or WeChat). A typical communication scenario is as follows: the designer sends the content of the design scheme to the group session as a screenshot so that everyone can publish comments, discuss the scheme, and so on, and other participants feed back personal opinions or evaluations to the group session by sending annotated design pictures or instant messages, thereby discussing the design scheme. Specifically, referring to FIG. 1, FIG. 1 is a schematic diagram of a typical scene of picture-based multi-user collaborative work provided in the related art: a designer sends a screenshot of a daily sign-in card to the group session, and other session members send the parts that need improvement and optimization back to the group session as further screenshots, such as the "calendar" screenshot shown in FIG. 1, together with instant messages such as "the calendar in the upper right corner: the border is a bit thin and hard to see clearly" to publish personal opinions. As a result, when multiple people in a group session publish different opinions at the same time, the information becomes cluttered and important information is easily missed; and if several design schemes are discussed simultaneously, the information easily becomes confused, which greatly reduces the efficiency of multi-user collaborative work.
Based on this, the picture processing method, apparatus, system, electronic device, and storage medium according to the embodiments of the present invention are proposed to solve at least the above problems in the related art; they are described separately below.
Based on the above explanations of the terms involved in the embodiments of the present invention, the picture processing system provided by the embodiments of the present invention is described first. Referring to FIG. 2, FIG. 2 is a schematic diagram of the architecture of the picture processing system provided by an embodiment of the present invention. To support an exemplary application, terminals 200 (including a terminal 200-1 and a terminal 200-2) are connected to a server 100 through a network 300; the network 300 may be a wide area network, a local area network, or a combination of the two, and uses wireless or wired links for data transmission.
An instant messaging client is provided on the terminal 200 (such as the terminal 200-1) and is used to present, through a session window of a group session, a session message containing a target picture, to present an operation function key corresponding to the target picture, and to present, in response to a click operation on the operation function key, a collaborative editing page corresponding to the target picture;
the terminal 200 (such as the terminal 200-1) is configured to send the editing data corresponding to the target picture to the server in response to an editing operation on the target picture triggered through the collaborative editing page;
the server 100 is configured to receive the editing data corresponding to the target picture sent by a terminal and to synchronize it to the terminals of the other session members, thereby implementing picture-based multi-user collaborative editing;
and the terminal 200 (such as the terminal 200-1) is configured to present the synchronously obtained edited target picture.
In this way, session members can enter a collaborative editing page on which the target picture can be edited through the function entry provided by the operation function key, and multiple session members can edit the target picture collaboratively on that page instead of expressing their editing intentions as group session messages. This avoids the message confusion that arises when multiple people edit a picture by sending messages about it in the session window, and thereby improves the efficiency of picture processing in the group session.
In practical applications, the server 100 may be a server configured independently to support various services, or may be a server cluster; the terminal (e.g., terminal 200-1) may be any type of user terminal such as a smartphone, tablet, laptop, etc., and may also be a wearable computing device, a Personal Digital Assistant (PDA), a desktop computer, a cellular phone, a media player, a navigation device, a game console, a television, or a combination of any two or more of these or other data processing devices.
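To make the data flow in FIG. 2 more concrete, the following TypeScript sketch shows one possible shape for the editing data that a terminal sends and the server relays to the other session members' terminals. It is only an illustration under assumed names (PictureAnnotation, a WebSocket-style channel); the patent does not specify a message format or transport.

```typescript
// Hypothetical shape of the editing data exchanged in FIG. 2; all names are
// illustrative assumptions, not taken from the patent.
interface PictureAnnotation {
  pictureId: string;   // identifies the target picture in the group session
  memberId: string;    // session member who made the edit
  region: { x: number; y: number; width: number; height: number }; // selected area
  content: string;     // first/second content entered for the target portion
  timestamp: number;
}

// Terminal side: send an edit to the server over an assumed WebSocket-style channel.
function sendEdit(socket: WebSocket, edit: PictureAnnotation): void {
  socket.send(JSON.stringify({ type: "collab-edit", payload: edit }));
}

// Server side: relay the edit to every other session member's terminal so each
// client can present the synchronized, edited target picture.
function broadcastEdit(clients: Map<string, WebSocket>, edit: PictureAnnotation): void {
  for (const [memberId, client] of clients) {
    if (memberId !== edit.memberId && client.readyState === WebSocket.OPEN) {
      client.send(JSON.stringify({ type: "collab-edit", payload: edit }));
    }
  }
}
```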
The following describes in detail a hardware structure of an electronic device for executing a picture processing method according to an embodiment of the present invention, with reference to fig. 3, where fig. 3 is a schematic structural diagram of the electronic device according to the embodiment of the present invention, and an electronic device 300 shown in fig. 3 includes: at least one processor 310, memory 350, at least one network interface 320, and a user interface 330. The various components in electronic device 300 are coupled together by a bus system 340. It will be appreciated that the bus system 340 is used to enable communications among the components connected. The bus system 340 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 340 in fig. 3.
The processor 310 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 330 includes one or more output devices 331, including one or more speakers and/or one or more visual display screens, that enable presentation of media content. The user interface 330 also includes one or more input devices 332, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 350 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 350 optionally includes one or more storage devices physically located remote from processor 310.
The memory 350 may include either volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 350 described in embodiments of the invention is intended to comprise any suitable type of memory.
In some embodiments, memory 350 is capable of storing data, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below, to support various operations.
An operating system 351 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 352 for communicating with other computing devices via one or more (wired or wireless) network interfaces 320, exemplary network interfaces 320 including: Bluetooth, Wi-Fi, Universal Serial Bus (USB), etc.;
a presentation module 353 for enabling presentation of information (e.g., a user interface for operating peripherals and displaying content and information) via one or more output devices 331 (e.g., a display screen, speakers, etc.) associated with the user interface 330;
an input processing module 354 for detecting one or more user inputs or interactions from one of the one or more input devices 332 and translating the detected inputs or interactions.
In some embodiments, the image processing apparatus provided by the embodiments of the present invention can be implemented in software, and fig. 3 shows an image processing apparatus 355 stored in a memory 350, which can be software in the form of programs and plug-ins, and includes the following software modules: a first rendering module 3551, a second rendering module 3552, and a third rendering module 3553, which are logical and thus may be arbitrarily combined or further separated according to the functions implemented, and the functions of the respective modules will be described below.
In other embodiments, the picture processing apparatus provided in the embodiments of the present invention may be implemented by a combination of hardware and software. As an example, the picture processing apparatus provided in the embodiments of the present invention may be a processor in the form of a hardware decoding processor that is programmed to execute the picture processing method provided in the embodiments of the present invention; for example, the processor in the form of a hardware decoding processor may employ one or more Application-Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), or other electronic components.
Based on the above description of the image processing system and the electronic device according to the embodiments of the present invention, the image processing method according to the embodiments of the present invention is described below. Referring to fig. 4, fig. 4 is a schematic flowchart of a picture processing method according to an embodiment of the present invention; in some embodiments, the image processing method may be implemented by a server or a terminal alone, or implemented by a server and a terminal in a cooperative manner, taking the terminal as an example, the image processing method provided in the embodiments of the present invention includes:
step 401: and the terminal presents the session message containing the target picture through a session window of the group session.
In practical applications, an instant messaging client may be installed on the terminal, and a session window in which the user converses is presented by running the instant messaging client; the session may be a group session or a one-to-one chat. Through the session window, the user can carry out instant messaging, such as sending text messages, picture messages, or video messages.
When a user needs to share a target picture with other users through a group session, or to collect other users' opinions about it, the user can send the target picture to the group session as an instant session message. The terminal then presents the session message containing the target picture to the session members through the session window of the group session, and a session member can view the details of the target picture by clicking the session message.
For example, referring to FIG. 5, FIG. 5 is a schematic diagram of presenting a session message containing a target picture according to an embodiment of the present invention. Here, session member 1 of the group session sends a session message containing the target picture, a "daily sign-in card", to the other session members, and the terminal presents the session message containing the daily sign-in card picture through the session window of the group session.
Step 402: presenting the operation function key corresponding to the target picture.
Here, the operation function key indicates a function entry for performing collaborative editing of the target picture. The user can start the collaborative editing function for the target picture by clicking the operation function key.
In some embodiments, the terminal may present the operation function key corresponding to the target picture as follows: in response to a click operation on the session message containing the target picture, the terminal presents the target picture and presents the operation function key within the presented target picture.
Based on the session message containing the target picture presented by the terminal, each session member can view the details of the target picture by clicking the session message. After receiving a click operation on the session message containing the target picture, the terminal responds by presenting the target picture, for example by displaying it enlarged; to enable the session members to collaboratively edit the target picture, the operation function key corresponding to the target picture is presented within the presented target picture at the same time, so that each session member can start the collaborative editing function for the target picture by clicking the operation function key.
For example, referring to FIG. 6, FIG. 6 is a first schematic diagram of presenting an operation function key according to an embodiment of the present invention. Here, in response to a click operation on the session message containing the "daily sign-in card" picture, the terminal presents the picture and presents the corresponding operation function key within it. The operation function key may be identified by text, a function icon, or the like, and its position can be set as needed; in FIG. 6 it is a rectangular "Enter collaborative editing" button presented below the target picture.
In addition to being presented within the target picture as described above, in other embodiments the operation function key corresponding to the target picture can be presented through the session window of the group session.
In actual implementation, when the operation function key is presented through the session window, it may be presented once a session message containing the target picture is detected, or it may be presented at all times.
For example, referring to FIG. 7, FIG. 7 is a second schematic diagram of presenting an operation function key according to an embodiment of the present invention. Here, when a session member enters the session window of the group session, the terminal presents the operation function key corresponding to the "daily sign-in card" picture through the session window. Again, the operation function key may be identified by text, a function icon, or the like, and its position can be set as needed; in FIG. 7 it is a rectangular "Enter collaborative editing" button presented in the upper right corner of the session window.
By applying this embodiment, setting an operation function key for the target picture enables multi-user collaborative editing of the target picture; when a scheme concerning the target picture is discussed, the information is gathered in one place through collaborative editing, which improves working efficiency.
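The following TypeScript sketch illustrates, under assumed DOM structure and helper names, how a client might implement the flow of steps 401-403: clicking the session message presents the enlarged target picture together with an "Enter collaborative editing" operation function key, and clicking that key opens the collaborative editing page. It is a minimal illustration, not the patent's implementation.

```typescript
// Assumed DOM-based client; openCollaborativeEditingPage() is a placeholder for
// step 403 and would normally render the page described below.
function openCollaborativeEditingPage(pictureId: string): void {
  console.log(`open collaborative editing page for picture ${pictureId}`);
}

// Step 401/402 sketch: clicking the session message presents the enlarged target
// picture together with the "Enter collaborative editing" operation function key.
function onPictureMessageClick(pictureUrl: string, pictureId: string): void {
  const overlay = document.createElement("div");

  const img = document.createElement("img");
  img.src = pictureUrl; // enlarged display of the target picture

  const button = document.createElement("button");
  button.textContent = "Enter collaborative editing"; // operation function key
  button.onclick = () => openCollaborativeEditingPage(pictureId); // step 403 entry

  overlay.append(img, button);
  document.body.appendChild(overlay);
}
```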
Step 403: in response to the click operation on the operation function key, presenting a collaborative editing page corresponding to the target picture.
Here, the collaborative editing page is used by at least two session members of the group session to edit the target picture and to present the synchronously obtained edited target picture.
When a session member needs to collaboratively edit the target picture, the member can click the presented operation function key. The terminal receives the session member's click operation on the operation function key and, in response, presents a collaborative editing page for editing the target picture.
In some embodiments, the terminal may present a collaborative editing page containing an annotation function key; the annotation function key indicates a function entry for annotating a target portion of the target picture.
In some embodiments, the terminal may also present a collaborative editing page containing an exit function key; the exit function key indicates a function entry for exiting the collaborative editing of the target picture.
For example, referring to FIG. 8, FIG. 8 is a schematic diagram of a collaborative editing page according to an embodiment of the present invention. Here, in response to the click operation on the operation function key, the terminal presents the collaborative editing page corresponding to the target picture. The collaborative editing page is built on the target picture: with the target picture as the bottom layer, an annotation function key and an exit function key are added, namely the two buttons "Box-select annotation point" and "Exit collaborative editing" shown in FIG. 8.
In actual implementation, after the terminal presents the collaborative editing page containing the annotation function key, a session member can trigger the function entry for annotating a target portion of the target picture by clicking the annotation function key, thereby publishing opinions or evaluations on that portion. After finishing editing, the session member can exit the collaborative editing function for the target picture by clicking the exit function key.
By applying this embodiment, the annotation function key and the exit function key provided on the collaborative editing page allow a target portion of the target picture to be annotated and the collaborative editing page to be exited once the annotation is finished.
In some embodiments, in response to a click operation on the annotation function key, the terminal presents an area selection box for selecting the area corresponding to the target portion; upon receiving an area selection instruction for the target portion triggered through the area selection box, it presents a first content input box corresponding to the target portion; the first content input box is used to input and display first content related to the target portion so as to annotate the target portion.
In practical applications, a session member starts the annotation function for the target picture by clicking the annotation function key. After receiving the click operation on the annotation function key, the terminal responds by presenting an area selection box for selecting the area corresponding to the target portion. The session member can select the area of the target portion to be annotated by dragging the area selection box; the size of the box can also be adjusted by dragging with a finger so that the selected target area is more accurate.
Upon receiving the area selection instruction for the target portion triggered through the area selection box, the terminal automatically pops up a first content input box corresponding to the target portion. The first content input box is used by the session members to input and present first content related to the target portion, such as opinions or evaluations of that portion, thereby annotating the target portion.
Here, the first content input box may be presented in a fixed region of the collaborative editing page, such as its center, upper-left corner, or lower-right corner; its position may also be determined based on the selected target portion, for example to the left of or below the target portion. Specifically, to ensure that the session member has enough input space, when the position is determined based on the selected target portion it can be chosen according to which region of the collaborative editing page the target portion occupies; for example, if the target portion is in the upper-right corner of the collaborative editing page, the first content input box can only be presented below or to the left of the target portion.
For example, referring to FIG. 9, FIG. 9 is a first schematic diagram of a picture processing flow based on a collaborative editing page according to an embodiment of the present invention. Here, the terminal receives the session member's click operation on the "Box-select annotation point" annotation function key and presents an area selection box whose position and size can be changed by dragging. In response to the area selection instruction triggered through the area selection box, the box is presented over the area corresponding to the target portion, such as the "calendar" portion shown in FIG. 9, and the corresponding first content input box "Please input your opinion" pops up for inputting content or opinions related to the "calendar" portion.
By applying this embodiment, a target portion of the target picture is selected through the area selection box, and once it is selected, the first content input box corresponding to the target portion is presented for inputting content (opinions or evaluations) related to it, thereby annotating the target portion.
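As one way to realize the placement rule described above (keeping the first content input box inside the collaborative editing page and near the selected target portion), the following TypeScript sketch computes a position from the selected region and the page size. The rectangle fields and the fallback order are assumptions for illustration.

```typescript
// Assumed rectangle type and placement preference (right of the region, else left;
// below the region, else above), clamped to the collaborative editing page.
interface Rect { x: number; y: number; width: number; height: number; }

function placeContentInputBox(
  region: Rect,
  page: { width: number; height: number },
  box: { width: number; height: number }
): { x: number; y: number } {
  // Prefer the right side of the selected region; fall back to the left side.
  const x = region.x + region.width + box.width <= page.width
    ? region.x + region.width
    : Math.max(0, region.x - box.width);

  // Prefer just below the selected region; fall back to above it.
  const y = region.y + region.height + box.height <= page.height
    ? region.y + region.height
    : Math.max(0, region.y - box.height);

  return { x, y };
}
```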
In some embodiments, if editing data (for example, annotated content) corresponding to the target picture already exists when the collaborative editing page is entered, and a session member still needs to annotate the already-annotated portion or input more related content, the annotation function can again be triggered by clicking the annotation function key. In that case, in response to the click operation on the annotation function key, the terminal presents a function icon identifying that the target portion has a corresponding annotation, and, in response to an annotation instruction for the target portion triggered through the function icon, presents a second content input box associated with the function icon.
In practical applications, after receiving the click operation on the annotation function key, the terminal responds by presenting a function icon identifying that the target portion has a corresponding annotation. The form of the function icon can be set as needed, and the annotated messages or the number of comments can be displayed alongside it. A session member can click the presented function icon to trigger an annotation instruction for the target portion; in response, the terminal pops up the second content input box associated with the function icon so that the session member can continue to input second content, such as opinions or evaluations, related to the target portion.
For example, referring to FIG. 10, FIG. 10 is a second schematic diagram of a picture processing flow based on a collaborative editing page according to an embodiment of the present invention. Here, the terminal receives the session member's click operation on the "Box-select annotation point" annotation function key and, in response, presents function icons identifying the target portions that already have annotations, such as the frames around the "calendar" portion and the "share button" portion shown in FIG. 10. In response to an annotation instruction triggered by clicking a function icon, the terminal presents the second content input box associated with that icon, which may contain the previously annotated content, namely "Session member 1: the border of the calendar is a bit thin and not very clear." The session members can continue to annotate the "calendar" portion by clicking the second content input box.
In some embodiments, the terminal receives and presents second content related to the target portion that is input through the second content input box; it hides the presented second content in response to an input-completion instruction, and displays the hidden second content when a click operation on the function icon is received.
Here, the second content input box for the target portion can be hidden or displayed according to the user's click operations, so that when a large amount of content has been input for the target picture, the input content can be presented sensibly, which improves the user experience.
For example, referring to FIG. 11, FIG. 11 is a schematic diagram of a second content input box according to an embodiment of the present invention. Here, after inputting the second content for the "calendar" portion, the session member can click the "Done" button to end this annotation; the terminal responds to the input-completion instruction and hides the second content. When the second content needs to be presented again, the session member can click the function icon corresponding to the "calendar" portion, and the terminal displays the hidden second content in response to the click.
By applying this embodiment, when a target portion has a corresponding annotation, the annotation is identified by a function icon that reminds session members to check the annotation for that portion, and further content can be input through the second content input box; moreover, the second content input box can be hidden or displayed according to the session members' click operations, so that the content is presented sensibly even when there is a lot of it.
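A minimal TypeScript sketch of the hide/show behaviour described above follows; the record shape and handler names are assumptions, not the patent's data model.

```typescript
// Assumed record kept per annotated target portion; "entries" accumulates the
// first and second content, and "visible" is toggled by the function icon.
interface PortionAnnotation {
  region: { x: number; y: number; width: number; height: number };
  entries: { memberId: string; text: string }[];
  visible: boolean;
}

function onInputComplete(annotation: PortionAnnotation): void {
  annotation.visible = false;               // hide the content after "Done"
}

function onFunctionIconClick(annotation: PortionAnnotation): void {
  annotation.visible = !annotation.visible; // re-display (or hide) the hidden content
}
```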
In some embodiments, the terminal may further present, through the session window of the group session or the collaborative editing page, at least one of the following operation states for the target picture: the number of session members participating in editing the target picture; the identifiers of the session members participating in editing the target picture; the number of messages input for the target picture by those session members; and an identifier prompting that the messages corresponding to the target picture have been updated.
In practical applications, the terminal can present the operation state of the target picture through the session window of the group session or the collaborative editing page, so that session members can know the collaborative editing state of the target picture at any time. Specifically, the number and identifiers of the session members participating in editing the target picture may be presented, where a session member's identifier may be represented by the member's avatar, user name, and so on.
Further, the terminal can also present the number of messages input for the target picture by the session members and an identifier prompting that the messages corresponding to the target picture have been updated. Here, an input message is content annotated for the target picture, such as an opinion or evaluation, and may be presented in the form "inputting member ID — input content". When the messages corresponding to the target picture are updated, a notification text about the update can be presented in the session window, or a function icon in a preset form, such as a dot or numeric badge, can be presented at the corresponding position of the collaborative editing page.
For example, referring to FIG. 12, FIG. 12 is a schematic diagram of presenting the operation state of a target picture through a session window according to an embodiment of the present invention. Here, as shown in FIG. 12, the terminal presents the operation state for the target picture through the session window of the group session, including the number of session members participating in editing the target picture, their identifiers, the number of messages input for the target picture, and the text "New messages, please view" prompting that the input messages have been updated; it also presents the identifiers of the session members who have viewed the updated messages.
For example, referring to FIG. 13, FIG. 13 is a schematic diagram of presenting the operation state of a target picture through a collaborative editing page according to an embodiment of the present invention. Here, as shown in FIG. 13, the terminal presents the operation state for the target picture through the collaborative editing page, including the number of session members participating in editing the target picture, their identifiers, the number of messages input for the target picture, and an identifier indicating that the input messages have been updated; this identifier may be a dot badge or a numeric badge.
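The operation state listed above can be modelled with a small record per target picture; the following TypeScript sketch is one assumed representation together with a simple textual rendering such as a session window might show.

```typescript
// Assumed per-picture operation state mirroring the items listed above.
interface PictureOperationState {
  editingMemberIds: string[]; // identifiers of members editing the target picture
  messageCount: number;       // messages input for the target picture
  hasUpdate: boolean;         // whether the messages have an unread update
}

function renderOperationState(state: PictureOperationState): string {
  const update = state.hasUpdate ? "; new messages, please view" : "";
  return `${state.editingMemberIds.length} member(s) editing, ` +
         `${state.messageCount} message(s)${update}`;
}
```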
In some embodiments, the terminal may notify a session member so that the member pays timely attention to the editing state of the target picture: in response to a notification instruction for a first target session member, it sends a first notification message to the first target session member to notify that member to pay attention to the collaborative editing page.
In practical applications, the @-mention function for session members is also supported during collaborative editing. When a session member edits the target picture and the edited content needs the timely attention of other session members, a notification instruction for a first target session member can be triggered by @-mentioning that member, so that the first target session member is notified to pay attention to the collaborative editing page in time.
Upon receiving the notification instruction for the first target session member, the terminal responds by sending a first notification message to the first target session member, notifying that member to pay timely attention to the collaborative editing page of the target picture.
By applying this embodiment, @-mentioning the first target session member causes a first notification message to be sent to that member when the member needs to be notified to pay timely attention to the collaborative editing page, which avoids important information being missed.
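A short TypeScript sketch of the @-mention path follows: when a notification instruction for a first target session member is received, a first notification message is sent to that member. The sendMessage transport and the message text are hypothetical.

```typescript
// sendMessage is a hypothetical transport helper; the text is illustrative.
function notifyMentionedMember(
  sendMessage: (memberId: string, text: string) => void,
  mentionedMemberId: string,
  pictureId: string
): void {
  // First notification message asking the @-mentioned (first target) member to
  // pay attention to the collaborative editing page of the target picture.
  sendMessage(
    mentionedMemberId,
    `You were mentioned in the collaborative editing of picture ${pictureId}; please take a look.`
  );
}
```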
In some embodiments, the terminal may set the second target session members who have the operation authority for the target picture as follows: in response to a permission-setting instruction for the session members of the group session, it presents a member selection interface containing the session members of the group session; and, in response to a member selection instruction triggered through the member selection interface, it determines the selected session members to be second target session members.
In practical applications, second target session members with the corresponding operation authority can be set for the target picture. Here, after the sender of the target picture (or a permission setter who is allowed to set the operation authority for the target picture) selects the target picture to be sent, the terminal may pop up an interface asking whether to set the operation authority for the target picture, so that the permission setter can decide whether to trigger the permission-setting instruction for the session members of the group session.
When the terminal receives the permission-setting instruction for the session members of the group session, it responds by presenting a member selection interface listing the session members of the group session, with a selection button for each member. The permission setter can trigger a member selection instruction by clicking the selection buttons of the desired members, and the terminal, in response, determines the selected session members to be second target session members who have the operation authority for the target picture.
By applying this embodiment, session members with the operation authority for the target picture are designated, so that only those members are allowed to collaboratively edit the target picture, which avoids the information confusion caused by unrelated people editing it by mistake.
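The permission flow above can be sketched as a set of member identifiers checked before an edit is accepted; in the following TypeScript sketch the set, the handlers, and the fallback of allowing everyone to edit when no authority has been configured are illustrative assumptions.

```typescript
// Assumed set of second target session members with the operation authority.
const membersWithOperationAuthority = new Set<string>();

// Called when the permission setter confirms the member selection interface.
function onMembersSelected(selectedMemberIds: string[]): void {
  for (const id of selectedMemberIds) {
    membersWithOperationAuthority.add(id);
  }
}

// Checked before accepting an edit; allowing everyone when no authority has been
// configured is an assumption, since setting the authority is optional above.
function canEditTargetPicture(memberId: string): boolean {
  return membersWithOperationAuthority.size === 0 ||
         membersWithOperationAuthority.has(memberId);
}
```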
In some embodiments, the terminal may notify the conversation member in such a way that the conversation member edits the target picture in time: and sending a second notification message to the conversation member with the operation authority of the target picture to notify the conversation member with the operation authority to carry out collaborative editing on the target picture.
In practical application, the terminal can also send a second notification message to the conversation member with the operation authority of the target picture so as to notify the conversation member with the operation authority to perform collaborative editing on the target picture in time. Specifically, the terminal may send the second notification message to all session members having the operation right while presenting the session message, that is, while presenting the target picture for the first time; in the process of collaborative editing of the target picture, when a certain conversation member with operation authority is found not to carry out collaborative editing on the target picture, a second notification message is sent to the conversation member to prompt the conversation member to finish the collaborative editing on the target picture as soon as possible.
By applying this embodiment, the second notification message is sent to the session members having operation authority for the target picture, notifying them to collaboratively edit the target picture in time and thereby improving office efficiency.
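As a minimal sketch under the same caveat (the notification transport and every identifier are hypothetical), the reminder logic for the second notification message could look like this in TypeScript:

```typescript
// Illustrative reminder logic for the "second notification message".
// The notify() transport and all identifiers are assumptions for this sketch.

interface EditState {
  memberId: string;   // a session member holding operation authority
  hasEdited: boolean; // whether the member has already edited the picture
}

type Notifier = (memberId: string, message: string) => void;

// Sends a reminder to every authorized member who has not yet contributed
// collaborative edits to the target picture.
function remindPendingEditors(
  pictureId: string,
  states: EditState[],
  notify: Notifier
): void {
  for (const state of states) {
    if (!state.hasEdited) {
      notify(
        state.memberId,
        `Please complete your collaborative edits on picture ${pictureId}.`
      );
    }
  }
}

// Example usage with a console-based notifier.
remindPendingEditors(
  "pic-001",
  [
    { memberId: "u-alice", hasEdited: true },
    { memberId: "u-bob", hasEdited: false },
  ],
  (memberId, message) => console.log(`[notify ${memberId}] ${message}`)
);
```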
In some embodiments, the terminal may turn off the collaborative editing function for the target picture by: responding to the collaborative editing closing instruction aiming at the target picture, and acquiring the collaborative editing closing authority of a third target session member corresponding to the collaborative editing closing instruction; and when the third target session member is determined to have the collaborative editing closing right, closing the collaborative editing function aiming at the target picture.
In practical application, after the group session members have finished collaboratively editing the target picture, the collaborative editing function for the target picture can be closed to avoid unnecessary discussion and reminders. Specifically, the session members having the collaborative-editing-close authority may be set in advance. When the terminal receives a collaborative editing close instruction for a target picture, it determines whether the third target session member who triggered the instruction has the collaborative-editing-close authority. When the third target session member is determined to have that authority, the terminal responds to the instruction by closing the collaborative editing function for the target picture. When the third target session member is determined not to have that authority, closing the collaborative editing function is not allowed.
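A minimal sketch of this close-authority check, again with hypothetical names only, might be:

```typescript
// Illustrative authority check before closing collaborative editing.
// The permission source and all identifiers are assumptions for this sketch.

interface CloseRequest {
  pictureId: string;
  requesterId: string; // the "third target session member"
}

class CollabSession {
  private closed = false;

  constructor(
    private readonly pictureId: string,
    private readonly closeAuthorized: Set<string> // members allowed to close
  ) {}

  // Closes the collaborative editing function only if the requester holds
  // the collaborative-editing-close authority; otherwise the request is refused.
  requestClose(request: CloseRequest): boolean {
    if (request.pictureId !== this.pictureId) {
      return false; // request targets a different picture
    }
    if (!this.closeAuthorized.has(request.requesterId)) {
      return false; // requester lacks the close authority
    }
    this.closed = true;
    return true;
  }

  isClosed(): boolean {
    return this.closed;
  }
}

// Example: only the pre-configured member may close collaborative editing.
const session = new CollabSession("pic-001", new Set(["u-owner"]));
console.log(session.requestClose({ pictureId: "pic-001", requesterId: "u-bob" }));   // false
console.log(session.requestClose({ pictureId: "pic-001", requesterId: "u-owner" })); // true
console.log(session.isClosed()); // true
```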
By applying the embodiment of the invention, when the terminal presents the session message containing the target picture, it also presents the operation function key corresponding to the target picture, the operation function key indicating the function entry for collaboratively editing the target picture; when a click operation on the operation function key is received, the collaborative editing page corresponding to the target picture is presented, so that at least two session members can collaboratively edit the target picture and the synchronized, edited target picture is presented. In this way, session members enter the collaborative editing page for the target picture through the function entry provided by the operation function key, and multiple session members can collaboratively edit the target picture on that page without having to express their editing intentions through group session messages. This avoids the message confusion that arises when several people edit a picture and send messages about it in the session window, and thereby improves the efficiency of processing pictures in the group session.
An exemplary application of the embodiments of the present invention in a practical application scenario will be described below.
When multiple people discuss the same design scheme together, communication is usually done online through instant messaging clients (such as QQ or WeChat). A typical communication scenario is as follows: the designer sends the content of the design scheme to the group session as a screenshot so that others can post comments and discuss the scheme, and the other participants feed back personal opinions or evaluations to the group session by sending annotated design pictures or instant messages. Specifically, referring to fig. 1, fig. 1 is a schematic diagram of a typical scene of picture-based multi-user collaborative office provided in the related art: a designer sends a screenshot of a day card to the group session, and other session members send the parts that need improvement and optimization back to the group session as further screenshots, such as the "calendar" screenshot shown in fig. 1, together with an instant message such as "for the calendar in the upper right corner, the frame is a bit thin and hard to see clearly" to express a personal opinion. When many people in the group session post different opinions at the same time, information becomes cluttered and important information is easily missed; and if several design schemes are discussed simultaneously, information confusion easily arises, which greatly reduces the efficiency of multi-user collaborative office.
Based on this, an embodiment of the present invention provides a picture processing method to solve the above problems; the method is described in detail below. Taking as an example a terminal that runs an instant messaging client to realize collaborative editing of a target picture, the picture processing method provided by the embodiment of the invention is further described. Referring to fig. 14, fig. 14 is a schematic flowchart of a picture processing method according to an embodiment of the present invention, and the picture processing method includes:
step 1401: and the terminal presents the session message containing the target picture through a session window of the group session.
Here, the terminal is provided with an instant messaging client, and by operating the instant messaging client, a session message including a target picture is presented in a session window of a group session. And the sender or the receiver of the target picture views the target picture through the conversation window.
Step 1402: and presenting the target picture in response to the click operation of the conversation message containing the target picture, and presenting an operation function key in the target picture.
Here, the operation function key indicates a function entry for performing collaborative editing of the target picture.
In some embodiments, the operation function key may also be presented through a session window of the group session.
Step 1403: and responding to the click operation aiming at the operation function key, and presenting a collaborative editing page corresponding to the target picture.
Here, the collaborative editing page is used for at least two session members in the group session to edit the target picture, and is used for presenting the edited target picture obtained by synchronization.
Step 1404: and presenting a function icon for identifying that the target part has a label in response to the click operation of the label function key of the collaborative editing page.
Here, the target picture already has corresponding collaborative editing data, and after receiving the collaborative editing data returned by the server, the terminal presents the collaborative editing data through a function icon for identifying that a label exists in the target part.
Step 1405: and presenting a content input box associated with the function icon in response to the marking instruction triggered based on the function icon.
Here, if the user wants to add further labels to the already-labeled target portion, clicking the function icon causes the terminal to present a content input box associated with that icon. The content input box already contains the previously entered content related to the target portion, and the user can continue to enter content related to the target portion in it.
Step 1406: and responding to the clicking operation of the marking function key of the collaborative editing page, and presenting an area selection box for selecting the area corresponding to the target part.
Step 1407: and receiving a region selection instruction triggered based on the region selection frame, and presenting the content input frame corresponding to the target part.
Here, the content input box is used for the user to input and display content related to the target portion, such as the user's opinion or rating, and the like.
Step 1408: and sending the collaborative editing data obtained by editing the target picture based on the collaborative editing page by the user to the server.
Here, the collaborative editing data may be data such as a frame selection area for the target portion and a viewpoint and opinion input by the user for the target portion.
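One possible shape for this collaborative editing data, sketched in TypeScript purely for illustration (the description names only the kinds of data, not a concrete schema, so every field name below is an assumption):

```typescript
// Hypothetical schema for the collaborative editing data described above:
// a framed region of the target portion plus the member's comment.

interface RegionBox {
  x: number;      // left edge, in picture pixels
  y: number;      // top edge, in picture pixels
  width: number;
  height: number;
}

interface Annotation {
  memberId: string;   // session member who made the annotation
  region: RegionBox;  // area selected with the area selection box
  comment: string;    // viewpoint or opinion entered in the content input box
  createdAt: number;  // epoch milliseconds
}

interface CollabEditData {
  pictureId: string;
  annotations: Annotation[];
}

// Example payload a terminal might upload in step 1408.
const upload: CollabEditData = {
  pictureId: "pic-001",
  annotations: [
    {
      memberId: "u-alice",
      region: { x: 820, y: 40, width: 160, height: 120 },
      comment: "The calendar frame in the upper right corner is too thin.",
      createdAt: Date.now(),
    },
  ],
};
console.log(JSON.stringify(upload, null, 2));
```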
Step 1409: and the server stores the collaborative editing data and synchronizes to other terminals participating in collaborative editing.
Step 1410: and the terminal presents the edited target picture obtained synchronously through the collaborative editing page.
Here, the user terminals participating in the collaborative editing present collaborative editing data for the target picture through the collaborative editing page, that is, the edited target picture obtained synchronously. For example, a marked target portion selected by the area selection box is displayed, and when the user clicks the area selection box of the target portion, details of user comment information or opinion input by the user for the target portion are presented.
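As a rough, non-authoritative sketch of the client-side sequence in steps 1401 to 1410, the following TypeScript fragment chains the main calls; the transport callbacks and every identifier are assumptions, since the embodiment does not name concrete APIs:

```typescript
// Condensed client-side sketch of steps 1403 to 1410. The fetch, upload, and
// render callbacks stand in for the real instant-messaging client and server
// APIs, which are not specified here.

interface EditedPicture {
  pictureId: string;
  annotations: unknown[];
}

async function onOperationKeyClicked(
  pictureId: string,
  fetchEdits: (id: string) => Promise<EditedPicture | null>,
  uploadEdits: (edits: EditedPicture) => Promise<void>,
  render: (view: EditedPicture) => void
): Promise<void> {
  // Step 1403: open the collaborative editing page and pull existing edits.
  const existing = (await fetchEdits(pictureId)) ?? { pictureId, annotations: [] };

  // Steps 1404-1405: show icons and input boxes for any existing labels.
  render(existing);

  // Steps 1406-1408: a hard-coded annotation stands in for the region the
  // user frames and the comment the user enters; it is then uploaded.
  existing.annotations.push({
    region: { x: 10, y: 10, width: 100, height: 60 },
    comment: "example comment",
  });
  await uploadEdits(existing);

  // Step 1410: re-render with the synchronized, edited target picture.
  render(existing);
}

// Example usage with in-memory stubs in place of the real server.
const store = new Map<string, EditedPicture>();
onOperationKeyClicked(
  "pic-001",
  async (id) => store.get(id) ?? null,
  async (edits) => { store.set(edits.pictureId, edits); },
  (view) => console.log("render:", view.annotations.length, "annotation(s)")
);
```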
In practical application, referring to fig. 15, fig. 15 is a schematic flowchart of collaborative editing of a picture according to an embodiment of the present invention. When the terminal presents the session message containing the target picture through the session window of the group session, the user can click the session message containing the target picture in the session window to enable the terminal to present the target picture containing the operation function key, and therefore the user can conveniently enter a collaborative editing mode aiming at the target picture by clicking the operation function key. At this time, the terminal responds to the click operation of the user on the operation function key, enters the collaborative editing mode of the target picture, and sends an acquisition request of collaborative editing data for the target picture to the server.
And the server responds to the acquisition request and judges whether the collaborative editing data corresponding to the target picture exists or not.
If the collaborative editing data exists, the server reads the collaborative editing data of the target picture and returns it to the terminal; the terminal presents a function icon identifying that the target portion of the target picture has corresponding collaborative editing data, and, in response to a click operation on the function icon, presents the input box associated with the icon and the content in it (such as the details of user comments). It may also receive and present a viewpoint or opinion newly entered by the user in the input box.
If the collaborative editing data does not exist, the terminal is notified to create collaborative editing data for the target picture.
The terminal presents a collaborative editing page for the user to collaboratively edit the target picture. The collaborative editing page includes a function entry indicating the labeling of a target portion in the target picture, and the user can trigger the labeling function for the target picture by clicking the labeling function key. The terminal, in response to the click operation on the labeling function key, presents a region selection box for selecting the region of the target portion; it then receives a region selection instruction triggered through the region selection box, marks the target portion selected by the user, and presents the input box corresponding to the target portion so that the user can enter viewpoints and opinions for that portion, thereby annotating the target picture.
After receiving an editing operation performed by a user on a target picture, the terminal uploads collaborative editing data (such as data of a frame selection position of a target portion, a viewpoint input by the user corresponding to the target portion, and the like) to the server, and the server stores the collaborative editing data and synchronizes the collaborative editing data to other user terminals participating in collaborative editing.
And the user terminal participating in the collaborative editing presents collaborative editing data aiming at the target picture through the collaborative editing page, for example, presents a labeled target part selected through the area selection box, and presents user comment information or view details input by the user aiming at the target part when the user clicks the area selection box of the target part.
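A minimal in-memory sketch of the server side of this flow (look up existing collaborative editing data, store uploads, and fan them out to the other participating terminals) could look as follows; the class, its methods, and the listener mechanism are all assumptions made for the example:

```typescript
// Illustrative server-side store-and-synchronize logic for collaborative edits.
// All identifiers are assumptions; a real implementation would use a database
// and a push channel rather than in-process listeners.

interface Annotation {
  memberId: string;
  region: { x: number; y: number; width: number; height: number };
  comment: string;
}

type SyncListener = (pictureId: string, annotations: Annotation[]) => void;

class CollabEditServer {
  private store = new Map<string, Annotation[]>();
  private listeners = new Map<string, SyncListener[]>();

  // A terminal entering collaborative editing mode asks for existing data;
  // null tells the terminal to create new collaborative editing data.
  fetch(pictureId: string): Annotation[] | null {
    return this.store.get(pictureId) ?? null;
  }

  // A participating terminal subscribes to synchronization updates.
  subscribe(pictureId: string, listener: SyncListener): void {
    const list = this.listeners.get(pictureId) ?? [];
    list.push(listener);
    this.listeners.set(pictureId, list);
  }

  // A terminal uploads collaborative editing data; the server stores it and
  // synchronizes the merged result to every subscribed terminal.
  upload(pictureId: string, annotations: Annotation[]): void {
    const merged = (this.store.get(pictureId) ?? []).concat(annotations);
    this.store.set(pictureId, merged);
    for (const listener of this.listeners.get(pictureId) ?? []) {
      listener(pictureId, merged);
    }
  }
}

// Example: two terminals subscribe, one uploads, both receive the update.
const server = new CollabEditServer();
server.subscribe("pic-001", (_, a) => console.log("terminal A sees", a.length, "annotation(s)"));
server.subscribe("pic-001", (_, a) => console.log("terminal B sees", a.length, "annotation(s)"));
server.upload("pic-001", [
  { memberId: "u-alice", region: { x: 0, y: 0, width: 50, height: 50 }, comment: "frame too thin" },
]);
```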
Continuing with the description of the picture processing apparatus 355 provided by the embodiments of the present invention, in some embodiments the picture processing apparatus can be implemented as a software module. Referring to fig. 16, fig. 16 is a schematic structural diagram of a picture processing apparatus 355 according to an embodiment of the present invention, where the picture processing apparatus 355 includes:
a first presenting module 3551, configured to present a session message containing a target picture through a session window of a group session;
a second presenting module 3552, configured to present an operation function key corresponding to the target picture, where the operation function key indicates a function entry for performing collaborative editing on the target picture;
a third presenting module 3553, configured to present, in response to a click operation on the operation function key, a collaborative editing page corresponding to the target picture;
and the collaborative editing page is used for at least two session members in the group session to edit the target picture and for presenting the edited target picture obtained synchronously.
In some embodiments, the second presentation module 3552 is further configured to respond to a click operation on the conversation message containing the target picture;
presenting the target picture, and presenting the operation function key in the presented target picture.
In some embodiments, the second presenting module 3552 is further configured to present an operation function key corresponding to the target picture through a session window of the group session.
In some embodiments, the third presenting module 3553 is further configured to present the collaborative editing page including annotation function keys;
wherein the labeling function key indicates a function entry for labeling a target part in the target picture.
In some embodiments, the third presenting module 3553 is further configured to present, in response to a clicking operation on the annotation function key, an area selection box for selecting an area corresponding to the target portion;
receiving a region selection instruction aiming at the target part and triggered based on the region selection box, and presenting a first content input box corresponding to the target part;
the first content input box is used for inputting and displaying first content related to the target part so as to realize annotation aiming at the target part.
In some embodiments, the third presenting module 3553 is further configured to present a function icon for identifying that a corresponding label exists in the target portion in response to a click operation on the label function key;
and presenting a second content input box associated with the functional icon in response to the marking instruction aiming at the target part and triggered by the functional icon.
In some embodiments, the third presenting module 3553 is further configured to receive and present second content related to the target portion input based on the second content input box;
hiding the second content in response to an input completion instruction, and displaying the hidden second content when a click operation for the function icon is received.
In some embodiments, the third presenting module 3553 is further configured to present the collaborative editing page including an exit function key;
wherein the exit function key indicates to exit a function entry for collaborative editing of the target picture.
In some embodiments, the apparatus further comprises:
a fourth presentation module, configured to present, through a session window of the group session or the collaborative editing page, at least one of the following operation states for the target picture:
the number of conversation members participating in editing the target picture;
the identification of the conversation member participating in editing the target picture;
the number of messages input by conversation members participating in editing the target picture aiming at the target picture;
and the identifier used for prompting that the message corresponding to the target picture has been updated.
In some embodiments, the apparatus further comprises:
a first sending module, configured to respond to a notification instruction for a first target session member;
and sending a first notification message corresponding to the first target session member to notify the first target session member to pay attention to the collaborative editing page.
In some embodiments, the apparatus further comprises:
a fifth presentation module, configured to present a member selection interface including session members of the group session in response to a permission setting instruction for the session members of the group session;
and responding to a member selection instruction triggered through the member selection interface, and determining that the selected conversation member is a second target conversation member, wherein the second target conversation member has the operation authority of the target picture.
In some embodiments, the apparatus further comprises:
and the second sending module is used for sending a second notification message to the conversation member with the operation authority of the target picture so as to notify the conversation member with the operation authority to carry out collaborative editing on the target picture.
In some embodiments, the apparatus further comprises:
the function closing module is used for responding to the collaborative editing closing instruction aiming at the target picture, and acquiring the collaborative editing closing authority of a third target session member corresponding to the collaborative editing closing instruction;
and when the third target session member is determined to have the collaborative editing closing right, closing the collaborative editing function aiming at the target picture.
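For orientation only, the module responsibilities listed above can be summarized as a TypeScript interface; the method names are assumptions intended to mirror the modules, not an API defined by the embodiment:

```typescript
// Rough interface-level view of the picture processing apparatus's modules.
// Signatures are illustrative assumptions.

interface PictureProcessingApparatus {
  // First presenting module: show the session message containing the target picture.
  presentSessionMessage(pictureId: string): void;

  // Second presenting module: show the operation function key for the picture.
  presentOperationKey(pictureId: string): void;

  // Third presenting module: open the collaborative editing page when the key is clicked.
  presentCollabEditingPage(pictureId: string): void;

  // Fourth presentation module: show operation states (editor count, identifiers, updates).
  presentOperationState(pictureId: string): void;

  // First and second sending modules: notification messages to session members.
  sendNotification(memberId: string, message: string): void;

  // Fifth presentation module: member selection interface for permission setting.
  presentMemberSelection(groupId: string): void;

  // Function closing module: close collaborative editing after an authority check.
  closeCollabEditing(pictureId: string, requesterId: string): boolean;
}
```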
An embodiment of the present invention further provides an electronic device, where the electronic device includes:
a memory for storing executable instructions;
and the processor is used for realizing the picture processing method provided by the embodiment of the invention when the executable instruction stored in the memory is executed.
The embodiment of the invention also provides a storage medium, which stores executable instructions, and when the executable instructions are executed by the processor, the image processing method provided by the embodiment of the invention is realized.
In some embodiments, the storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, an optical disk, or a CD-ROM; or it may be any of various devices including one of, or any combination of, the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, for example, in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
The above description is only an example of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present invention are included in the protection scope of the present invention.

Claims (15)

1. A picture processing method, characterized in that the method comprises:
presenting a session message containing a target picture through a session window of a group session;
presenting an operation function key corresponding to the target picture, wherein the operation function key indicates a function entrance for performing collaborative editing on the target picture;
responding to the click operation aiming at the operation function key, and presenting a collaborative editing page corresponding to the target picture;
and the collaborative editing page is used for at least two session members in the group session to edit the target picture and for presenting the edited target picture obtained synchronously.
2. The method of claim 1, wherein the presenting the operation function key corresponding to the target picture comprises:
responding to click operation aiming at the conversation message containing the target picture;
presenting the target picture, and presenting the operation function key in the presented target picture.
3. The method of claim 1, wherein the presenting the operation function key corresponding to the target picture comprises:
and presenting an operation function key corresponding to the target picture through a session window of the group session.
4. The method of claim 1, wherein the presenting the collaborative editing page corresponding to the target picture comprises:
presenting the collaborative editing page containing the annotation function key;
wherein the labeling function key indicates a function entry for labeling a target part in the target picture.
5. The method of claim 4, wherein the method further comprises:
responding to the click operation aiming at the marking function key, and presenting an area selection frame for selecting the area corresponding to the target part;
receiving a region selection instruction aiming at the target part and triggered based on the region selection box, and presenting a first content input box corresponding to the target part;
the first content input box is used for inputting and displaying first content related to the target part so as to realize annotation aiming at the target part.
6. The method of claim 4, wherein the method further comprises:
presenting a function icon for identifying that the target part has a corresponding label in response to the click operation of the label function key;
and presenting a second content input box associated with the functional icon in response to the marking instruction aiming at the target part and triggered by the functional icon.
7. The method of claim 6, wherein the method further comprises:
receiving and presenting second content related to the target portion, which is input based on the second content input box;
hiding the second content in response to an input completion instruction, and displaying the hidden second content when a click operation for the function icon is received.
8. The method of claim 1, wherein the presenting the collaborative editing page corresponding to the target picture comprises:
presenting the collaborative editing page comprising an exit function key;
wherein the exit function key indicates to exit a function entry for collaborative editing of the target picture.
9. The method of claim 1, wherein the method further comprises:
presenting, through a session window of the group session or the collaborative editing page, at least one of the following operating states for the target picture:
the number of conversation members participating in editing the target picture;
the identification of the conversation member participating in editing the target picture;
the number of messages input by conversation members participating in editing the target picture aiming at the target picture;
and the identifier used for prompting that the message corresponding to the target picture has been updated.
10. The method of claim 1, wherein the method further comprises:
responding to a notification instruction for the first target session member;
and sending a first notification message corresponding to the first target session member to notify the first target session member to pay attention to the collaborative editing page.
11. The method of claim 1, wherein the method further comprises:
responding to the permission setting instruction aiming at the conversation members of the group conversation, and presenting a member selection interface containing each conversation member of the group conversation;
and responding to a member selection instruction triggered through the member selection interface, and determining that the selected conversation member is a second target conversation member, wherein the second target conversation member has the operation authority of the target picture.
12. The method of claim 1, wherein the method further comprises:
and sending a second notification message to the conversation member with the operation authority of the target picture so as to notify the conversation member with the operation authority to carry out collaborative editing on the target picture.
13. The method of claim 1, wherein the method further comprises:
responding to the collaborative editing closing instruction aiming at the target picture, and acquiring the collaborative editing closing authority of a third target session member corresponding to the collaborative editing closing instruction;
and when the third target session member is determined to have the collaborative editing closing right, closing the collaborative editing function aiming at the target picture.
14. A picture processing apparatus, characterized in that the apparatus comprises:
the first presentation module is used for presenting the session message containing the target picture through a session window of the group session;
the second presentation module is used for presenting operation function keys corresponding to the target picture, and the operation function keys indicate function entries for performing collaborative editing on the target picture;
the third presentation module is used for responding to the click operation aiming at the operation function key and presenting a collaborative editing page corresponding to the target picture;
and the collaborative editing page is used for at least two session members in the group session to edit the target picture and for presenting the edited target picture obtained synchronously.
15. A storage medium storing executable instructions which, when executed by a processor, implement a picture processing method as claimed in any one of claims 1 to 13.
CN202010098716.5A 2020-02-18 2020-02-18 Picture processing method and device and storage medium Pending CN111309211A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010098716.5A CN111309211A (en) 2020-02-18 2020-02-18 Picture processing method and device and storage medium

Publications (1)

Publication Number Publication Date
CN111309211A true CN111309211A (en) 2020-06-19

Family

ID=71149103

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40024237; Country of ref document: HK)
SE01 Entry into force of request for substantive examination