CN110580730B - Picture processing method and device - Google Patents

Picture processing method and device

Info

Publication number
CN110580730B
Authority
CN
China
Prior art keywords
picture
color coding
descriptive information
information text
target processing
Legal status
Active
Application number
CN201810595824.6A
Other languages
Chinese (zh)
Other versions
CN110580730A (en)
Inventor
穆艳学
Current Assignee
Beijing Sogou Technology Development Co Ltd
Original Assignee
Beijing Sogou Technology Development Co Ltd
Application filed by Beijing Sogou Technology Development Co Ltd
Priority to CN201810595824.6A
Publication of CN110580730A
Application granted
Publication of CN110580730B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a picture processing method and a picture processing device, applied in the field of image processing. The method comprises: acquiring a first descriptive information text for a target processing picture; and synthesizing the first descriptive information text with the target processing picture to obtain a synthesized picture carrying the first descriptive information text, where the first descriptive information text carried by the synthesized picture does not occlude a main image area of the target processing picture. This solves the technical problem that adding descriptive information can damage the display effect of a picture.

Description

Picture processing method and device
Technical Field
The present invention relates to the field of image processing, and in particular, to a method and apparatus for processing a picture.
Background
With the rapid development of storage devices, pictures occupy more and more of their space. With so many pictures, it becomes difficult to remember their important information, such as the time and place of shooting and details about the people photographed. Therefore, when obtaining a picture, a user typically adds some descriptive information to it, such as the shooting time and place and data about the photographed subjects, for convenient later use.
In the existing method for adding descriptive information to a picture, an edit-box component is displayed on the picture according to a user instruction, and the descriptive text is written directly into the edit-box component, so that the descriptive text is displayed directly on the upper layer of the picture, achieving a combined picture-and-text display.
Clearly, in the prior art the text is displayed directly on the picture, and the user must specify its position; otherwise the image is blocked and the display effect of the picture is damaged.
Disclosure of Invention
The embodiment of the invention provides a picture processing method and device to solve the technical problem that the display effect of a picture can be damaged by adding descriptive information.
In a first aspect, an embodiment of the present invention provides a method for processing a picture, including:
acquiring a first descriptive information text for a target processing picture;
and carrying out synthesis processing on the first descriptive information text and the target processing picture to obtain a synthesized picture carrying the first descriptive information text, wherein the first descriptive information text carried by the synthesized picture does not occlude a main image area of the target processing picture.
Optionally, that the first descriptive information text does not cover the main image area of the target processing picture specifically means:
the first descriptive information text is located outside the main image area of the target processing picture, or
the first descriptive information text is hidden within the main image area of the target processing picture.
Optionally, the synthesizing the first descriptive information text and the target processing picture to obtain a synthesized picture carrying the first descriptive information text, where the first descriptive information text carried by the synthesized picture does not block a main image area of the target processing picture includes:
determining a main image area of the target processing picture;
and superposing the first descriptive information text to an area outside the main image area in the target processing picture.
Optionally, the determining the main image area of the target processing picture includes:
determining user characteristic information of a first user inputting the first descriptive information text;
and determining the main image area on the target processing picture according to the user characteristic information of the first user.
Optionally, after the superimposing the first descriptive information text on the area other than the main image area in the target processing picture, the method further includes:
determining user characteristic information of a second user currently viewing the composite picture;
determining a new main image area on the target processing picture according to the user characteristic information of the second user;
and presenting the target processing picture, and displaying the first descriptive information text in an area outside the new main image area.
Optionally, after the superimposing the first descriptive information text on the area other than the main image area in the target processing picture, the method further includes:
determining user characteristic information of a third user currently viewing the composite picture;
and presenting the target processing picture, and displaying the first descriptive information text in an area outside the main image area according to the user characteristic information of the third user.
Optionally, the displaying the first descriptive information text in the area outside the main image area according to the user characteristic information of the third user includes:
determining font information for the first descriptive information text according to the user characteristic information of the third user;
and presenting the target processing picture, and displaying a first descriptive information text in an area outside the main image area by using the font information.
Optionally, the first descriptive information text includes a plurality of sub descriptive information texts; the displaying the first descriptive information text in the area outside the main image area according to the user characteristic information of the third user comprises the following steps:
determining a sub-descriptive information text for current display in the first descriptive information text according to the user characteristic information of the third user;
and presenting the target processing picture, and displaying the current sub-descriptive information text for display in an area outside the main image area.
Optionally, the synthesizing the first descriptive information text and the target processing picture to obtain a synthesized picture carrying the first descriptive information text, where the first descriptive information text carried by the synthesized picture does not block a main image area of the target processing picture includes:
converting the first descriptive information text into a first color coding value sequence;
and writing a first color coding value sequence representing the first descriptive information text into the target processing picture to obtain a synthesized picture carrying the first color coding value sequence, wherein the first descriptive information text is not displayed on the target processing picture.
Optionally, the first descriptive information text includes M units of descriptive content, M being an integer greater than or equal to 1;
the converting the first descriptive information text into a first color coding value sequence includes:
determining a first color coding value sequence corresponding to the M unit descriptive contents according to the mapping relation between the unit descriptive content set and the color coding value set, wherein the first color coding value sequence comprises M color coding values;
the writing a first color coding value sequence representing the first descriptive information text into the target processing picture to obtain a synthesized picture carrying the first color coding value sequence includes:
and writing the first color coding value sequence into the target processing picture to obtain a synthesized picture of which the first descriptive information text is represented by the first color coding value sequence.
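For illustration only (no code forms part of the original disclosure), the following minimal Python sketch shows one way such a conversion could work, assuming each unit of descriptive content is a single character whose color coding value is derived from its Unicode code point; the disclosure itself only requires some one-to-one mapping between unit descriptive contents and color coding values.

```python
from typing import List, Tuple

Color = Tuple[int, int, int]  # one RGB color coding value

def text_to_color_sequence(text: str) -> List[Color]:
    """Map each character (one unit of descriptive content) to an RGB triple
    by splitting its Unicode code point across the three color channels."""
    seq: List[Color] = []
    for ch in text:
        cp = ord(ch)  # every Unicode code point fits in 24 bits
        seq.append(((cp >> 16) & 0xFF, (cp >> 8) & 0xFF, cp & 0xFF))
    return seq

def color_sequence_to_text(seq: List[Color]) -> str:
    """Inverse mapping, used when parsing a composite picture."""
    return "".join(chr((r << 16) | (g << 8) | b) for r, g, b in seq)
```

A table-driven mapping agreed between writer and parser would serve equally well, and matches the claimed mapping relation between the unit descriptive content set and the color coding value set more literally.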
Optionally, after the obtaining the composite picture representing the first descriptive information text with the first color coding value sequence, the method further includes:
acquiring an operation instruction for opening the synthesized picture;
according to the mapping relation between the unit descriptive content set and the color coding value set, analyzing the first color coding value sequence in the synthesized picture to obtain the first descriptive information text comprising the M unit descriptive contents;
and outputting the first descriptive information text.
Optionally, the mapping relation is pre-established in a target input method program;
the obtaining the first descriptive information text of the target processing picture comprises the following steps:
starting the target input method program;
acquiring the first descriptive information text input by a user through the target input method program;
the determining the first color coding value sequence corresponding to the M unit descriptive contents according to the mapping relation between the unit descriptive content set and the color coding value set comprises the following steps:
and calling the mapping relation pre-established in the target input method program through the target input method program, and determining a first color coding value sequence corresponding to the M unit descriptive contents.
Optionally, the parsing the first color coding value sequence in the synthesized picture according to the mapping relationship to obtain the first descriptive information text includes:
and calling the mapping relation pre-established in the target input method program through the target input method program, and analyzing the first color coding value sequence in the synthesized picture to obtain the first descriptive information text.
Optionally, the writing the first color coding value sequence on the target processing picture includes:
and sequentially replacing the primary color coding values of the M outer boundary pixel points of the target processing picture with M color coding values of the first color coding value sequence.
Optionally, after the sequentially replacing the primary color coding values of the M outer boundary pixel points of the target processing picture with the M color coding values of the first color coding value sequence, the method further includes:
replacing the primary color coding values of N consecutive outer boundary pixel points before the M outer boundary pixel points with first separator coding values, where N is an integer greater than or equal to 1;
and replacing the primary color coding values of K consecutive outer boundary pixel points after the M outer boundary pixel points with second separator coding values, where K is an integer greater than or equal to 1.
Optionally, the sequentially replacing the primary color coding values of the M outer boundary pixel points of the target processing picture with the M color coding values of the first color coding value sequence includes:
setting a starting point pixel position in an outer boundary area of the target processing picture in advance;
adding a starting point position identifier at the starting point pixel position, where the starting point position identifier is used to indicate information about the pixel points whose color coding values are to be replaced after the starting point pixel position;
and sequentially replacing the primary color coding values of the M outer boundary pixel points after the starting point pixel position with the M color coding values of the first color coding value sequence.
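Continuing the sketch above, writing the sequence into the outer boundary might look as follows with Pillow; the two separator values and the use of the top boundary row starting at a pre-agreed pixel are illustrative assumptions, not values fixed by the disclosure.

```python
from PIL import Image

SEP_FIRST = (1, 2, 3)   # hypothetical first separator coding value (N = 1)
SEP_SECOND = (3, 2, 1)  # hypothetical second separator coding value (K = 1)

def write_boundary(img: Image.Image, seq, origin=(0, 0)) -> Image.Image:
    """Replace the primary color coding values of outer boundary pixels with
    the first separator, the M code values, then the second separator.
    For brevity only the top boundary row is used."""
    out = img.convert("RGB")
    x, y = origin  # pre-agreed starting point pixel position
    framed = [SEP_FIRST] + list(seq) + [SEP_SECOND]
    for i, color in enumerate(framed):
        out.putpixel((x + i, y), color)
    return out
```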
Optionally, the parsing the first color coding value sequence in the composite picture according to the mapping relationship between the unit description content set and the color coding value set includes:
analyzing the synthesized picture, and determining the position of the first separator coding value and the position of the second separator coding value from the synthesized picture;
determining color coding values of sequential pixel points between the positions of the first separator coding values and the positions of the second separator coding values as the first color coding value sequence;
and according to the mapping relation, analyzing the first color coding value sequence into the first descriptive information text.
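The parsing side of the same sketch locates the two separator coding values and decodes the color coding values between them; it reuses `color_sequence_to_text` and the separator constants defined above, and the single-row layout remains an assumption.

```python
def read_boundary(img, origin=(0, 0)) -> str:
    """Find the separator frame on the top boundary row and decode the
    color coding values between the two separators back into text."""
    src = img.convert("RGB")
    x, y = origin
    row = [src.getpixel((i, y)) for i in range(x, src.size[0])]
    first = row.index(SEP_FIRST)               # position of first separator
    second = row.index(SEP_SECOND, first + 1)  # position of second separator
    return color_sequence_to_text(row[first + 1:second])
```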
Optionally, the parsing the first color coding value sequence in the composite picture according to the mapping relationship between the unit description content set and the color coding value set includes:
analyzing the starting point position identification from the starting point pixel position of the synthesized picture;
According to the starting point position mark, determining color coding values of M outer boundary pixel points behind the starting point pixel position as the first color coding value sequence;
and according to the mapping relation, analyzing the first color coding value sequence into the first descriptive information text.
Optionally, the parsing the first color coding value sequence in the composite picture according to the mapping relationship between the unit description content set and the color coding value set includes:
searching a first jumping pixel point and a second jumping pixel point from an outer boundary area of the synthesized picture;
determining color coding values of sequential pixel points between the positions of the first jumping pixel points and the positions of the second jumping pixel points as the first color coding value sequence;
and according to the mapping relation, analyzing the first color coding value sequence into the first descriptive information text.
Optionally, after the determining the first color coding value sequence corresponding to the M unit descriptive contents and before writing the first color coding value sequence on the target processing picture, the method further includes:
determining the number of replaceable pixel points in the target processing picture;
and if the number of color coding values in the first color coding value sequence is greater than the number of replaceable pixel points, outputting prompt information, wherein the prompt information is used for indicating that the number of color coding values in the first color coding value sequence exceeds the number of replaceable pixel points in the target processing picture.
Optionally, the determining the number of replaceable pixels of the target processing picture includes:
acquiring the total number of pixel points of the target processing picture, and determining the product of the total number of pixel points and a preset proportion value as the number of replaceable pixel points of the target processing picture; or
determining the total number of outer boundary pixel points of the target processing picture as the number of replaceable pixel points of the target processing picture.
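Both counting options read directly off the picture dimensions. A hedged sketch of the capacity check follows; the prompt wording is illustrative.

```python
def replaceable_pixel_count(img, ratio=None) -> int:
    """Option 1: a preset proportion of all pixels; option 2: the outer
    boundary pixels only (2w + 2h - 4 for a w x h picture)."""
    w, h = img.size
    if ratio is not None:
        return int(w * h * ratio)
    return 2 * w + 2 * h - 4

def fits(img, seq) -> bool:
    if len(seq) > replaceable_pixel_count(img):
        print("Prompt: the color coding value sequence exceeds the "
              "replaceable pixel points of the target processing picture.")
        return False
    return True
```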
Optionally, before acquiring the first descriptive information text of the target processing picture, the method further includes:
monitoring whether a picture browsing event and/or a photographing behavior event exist at present;
if the picture browsing event exists currently, determining a current browsing picture of the picture browsing event as the target processing picture;
and if the photographing behavior event is monitored to exist currently, determining a current photographing picture of the photographing behavior event as the target processing picture.
Optionally, after obtaining the composite picture in which the first descriptive information text is represented by the first color coding value sequence, the method further includes:
obtaining a sharing operation of sharing the synthesized picture to a target user object by a current login user;
judging whether the target user object belongs to a preset friend list of the current login user or not;
if so, generating a transcoding mapping relation for the M unit descriptive contents according to the mapping relation, and sending the synthesized picture and the transcoding mapping relation to opposite terminal equipment where the target user object is located, so that the opposite terminal equipment analyzes a first color coding value sequence in the synthesized picture into text contents different from the first descriptive information text based on the transcoding mapping relation;
and if not, sending the synthesized picture to opposite terminal equipment where the target user object is located, so that the opposite terminal equipment analyzes a first color coding value sequence in the synthesized picture into the first descriptive information text based on the mapping relation.
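A sketch of this sharing branch is shown below; the permutation-based derivation of the transcoding mapping relation and the `send` stub are assumptions, since the disclosure only requires that the peer device of a friend decodes the sequence into a text different from the first descriptive information text.

```python
import random

def send(user, picture, mapping=None):
    """Placeholder for the actual transfer to the opposite terminal device."""
    print(f"sending picture to {user} (transcode table: {mapping is not None})")

def share(picture, current_user_friends, target_user, base_mapping):
    if target_user in current_user_friends:
        # derive a transcoding mapping by permuting the decoded outputs,
        # so the peer parses the sequence into a different text
        units = list(base_mapping.values())
        random.shuffle(units)
        transcode = dict(zip(base_mapping.keys(), units))
        send(target_user, picture, transcode)
    else:
        send(target_user, picture)  # peer decodes with the base mapping
```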
Optionally, the mapping relationship between the unit description content set and the color coding value set specifically includes:
Each character in the character set and each color coding value in the color coding value set meet a one-to-one mapping relation; and/or
Each word in the word set and each color coding value in the color coding value set meet a one-to-one mapping relation; and/or
Each phrase in the phrase set and each color coding value in the color coding value set meet a one-to-one mapping relation; and/or
And each sentence in the sentence set and each color coding value in the color coding value set satisfy a one-to-one mapping relation.
Optionally, after obtaining the composite picture in which the first descriptive information text is represented by the first color coding value sequence, the method further includes:
obtaining a second descriptive information text comprising P units of descriptive content for the composite picture;
determining a second color coding value sequence corresponding to P units of descriptive contents included in the second descriptive information text according to the mapping relation, wherein P is an integer greater than or equal to 1;
replacing the first color coding value sequence on the composite picture with the second color coding value sequence, or writing the second color coding value sequence at a position on the target processing picture that does not belong to the first color coding value sequence;
A new composite picture is generated.
Optionally, the replacing the first color coding value sequence on the composite picture with the second color coding value sequence, or writing the second color coding value sequence at a position on the target processing picture that does not belong to the first color coding value sequence, includes:
judging whether the second descriptive information text and the first descriptive information text belong to the same information type;
and if so, replacing the first color coding value sequence on the composite picture with the second color coding value sequence; otherwise, writing the second color coding value sequence at a position on the target processing picture that does not belong to the first color coding value sequence.
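Reusing the helpers sketched earlier, the update rule might look like this; the append offset is an arbitrary placeholder, and for simplicity the sketch assumes the second sequence is not shorter than the first when overwriting in place.

```python
def update_description(img, first_seq, second_text, same_type: bool):
    """Replace the first sequence when the information types match,
    otherwise write the second sequence at a position outside it."""
    second_seq = text_to_color_sequence(second_text)
    if same_type:
        return write_boundary(img, second_seq)  # overwrite in place
    # append after the first frame (separator + M values + separator)
    return write_boundary(img, second_seq, origin=(len(first_seq) + 2, 0))
```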
Optionally, the acquiring the first descriptive information text of the target processing picture includes:
acquiring a text edited by a first user; and/or
identifying a main image area in the target processing picture, and selecting a text matched with the main image area from a preset descriptive information text library; and/or
acquiring the mark information text of the target processing picture.
In a second aspect, an embodiment of the present invention provides a picture processing method based on an input method, including:
starting a target input method program, where a mapping relation between a unit descriptive content set and a color coding value set is pre-established in the target input method program;
the steps of any implementation manner of the first aspect are executed based on the target input method program.
In a third aspect, an embodiment of the present invention provides a picture processing apparatus, including:
the descriptive information text acquisition unit is used for acquiring a first descriptive information text aiming at the target processing picture;
and the synthesis processing unit is used for carrying out synthesis processing on the first descriptive information text and the target processing picture to obtain a synthesized picture carrying the first descriptive information text, wherein the first descriptive information text carried by the synthesized picture does not shade a main image area of the target processing picture.
Optionally, that the first descriptive information text does not cover the main image area of the target processing picture specifically means:
the first descriptive information text is located outside the main image area of the target processing picture, or
the first descriptive information text is hidden within the main image area of the target processing picture.
Optionally, the synthesis processing unit includes:
A region determining subunit, configured to determine a main image region of the target processing picture;
and the superposition processing subunit is used for superposing the first descriptive information text to an area outside the main image area in the target processing picture.
Optionally, the region determining subunit is specifically configured to:
determining user characteristic information of a first user inputting the first descriptive information text;
and determining the main image area on the target processing picture according to the user characteristic information of the first user.
Optionally, the method further comprises:
a first feature information determining unit, configured to determine user feature information of a second user currently viewing the composite picture;
a new region determining unit, configured to determine a new main image region on the target processing picture according to user feature information of the second user;
and the first display unit is used for presenting the target processing picture and displaying the first descriptive information text in an area outside the new main image area.
Optionally, the method further comprises:
a second feature information determining unit, configured to determine user feature information of a third user currently viewing the composite picture;
And the second display unit is used for presenting the target processing picture and displaying the first descriptive information text in an area outside the main image area according to the user characteristic information of the third user.
Optionally, the second display unit is specifically configured to:
determining font information for the first descriptive information text according to the user characteristic information of the third user;
and presenting the target processing picture, and displaying a first descriptive information text in an area outside the main image area by using the font information.
Optionally, the second display unit is further specifically configured to:
determining a sub-descriptive information text for current display in the first descriptive information text according to the user characteristic information of the third user;
and presenting the target processing picture, and displaying the current sub-descriptive information text for display in an area outside the main image area.
Optionally, the synthesis processing unit includes:
a conversion subunit, configured to convert the first description information text into a first color coding value sequence;
and the writing subunit is used for writing a first color coding value sequence representing the first descriptive information text in the target processing picture to obtain a synthesized picture carrying the first color coding value sequence, wherein the first descriptive information text is not displayed on the target processing picture.
Optionally, the first descriptive information text includes M units of descriptive content, M being an integer greater than or equal to 1;
the conversion subunit is specifically configured to: determining a first color coding value sequence corresponding to the M unit descriptive contents according to the mapping relation between the unit descriptive content set and the color coding value set, wherein the first color coding value sequence comprises the M color coding values;
the writing subunit is specifically configured to: and writing the first color coding value sequence into the target processing picture to obtain a synthesized picture of which the first descriptive information text is represented by the first color coding value sequence.
Optionally, the method further comprises:
the instruction obtaining module is used for obtaining an operation instruction for opening the synthesized picture;
the color coding analysis module is used for analyzing the first color coding value sequence in the synthesized picture according to the mapping relation between the unit descriptive content set and the color coding value set to obtain the first descriptive information text comprising the M unit descriptive contents;
and the text output module is used for outputting the first descriptive information text.
Optionally, the mapping relation is pre-established in a target input method program;
The descriptive information text acquisition unit is specifically configured to:
starting the target input method program;
acquiring the first descriptive information text input by a user through the target input method program;
the determining the first color coding value sequence corresponding to the M unit descriptive contents according to the mapping relation between the unit descriptive content set and the color coding value set comprises the following steps:
and calling the mapping relation pre-established in the target input method program through the target input method program, and determining a first color coding value sequence corresponding to the M unit descriptive contents.
Optionally, the color coding parsing module is specifically configured to:
and calling the mapping relation pre-established in the target input method program through the target input method program, and analyzing the first color coding value sequence in the synthesized picture to obtain the first descriptive information text.
Optionally, the conversion subunit includes:
and the color coding substitution unit is used for sequentially substituting the primary color coding values of the M outer boundary pixel points of the target processing picture with the M color coding values of the first color coding value sequence.
Optionally, the method further comprises:
the front separator setting module is used for replacing the primary color coding values of N consecutive outer boundary pixel points before the M outer boundary pixel points with first separator coding values, where N is an integer greater than or equal to 1;
and the post-separator setting module is used for replacing the primary color coding values of K consecutive outer boundary pixel points after the M outer boundary pixel points with second separator coding values, where K is an integer greater than or equal to 1.
Optionally, the color coding substitution unit is specifically configured to:
setting a starting point pixel position in an outer boundary area of the target processing picture in advance;
adding a starting point position identifier at the starting point pixel position, wherein the starting point position identifier is used for representing pixel point related information needing to replace a color coding value after the starting point pixel position;
and sequentially replacing the primary color coding values of the M outer boundary pixel points behind the starting pixel position with M color coding values of the first color coding value sequence.
Optionally, the color coding parsing module includes:
the separator position determining unit is used for analyzing the synthesized picture and determining the position of the first separator coding value and the position of the second separator coding value from the synthesized picture;
A first sequence determining unit, configured to determine color coding values of sequential pixels between a position where the first separator coding value is located and a position where the second separator coding value is located as the first color coding value sequence;
and the first analysis text unit is used for analyzing the first color coding value sequence into the first descriptive information text according to the mapping relation.
Optionally, the color coding parsing module includes:
the identification analysis unit is used for analyzing the starting point position identification from the starting point pixel position of the synthesized picture;
a second sequence determining unit, configured to determine, according to the start position identifier, color coding values of M outer boundary pixel points after the start pixel position as the first color coding value sequence;
and the second analysis text unit is used for analyzing the first color coding value sequence into the first descriptive information text according to the mapping relation.
Optionally, the color coding parsing module includes:
the jumping pixel point searching unit is used for searching a first jumping pixel point and a second jumping pixel point from the outer boundary area of the synthesized picture;
A third sequence determining unit, configured to determine color coding values of sequential pixel points between the position where the first jumping pixel point is located and the position where the second jumping pixel point is located as the first color coding value sequence;
and the third analysis text unit is used for analyzing the first color coding value sequence into the first descriptive information text according to the mapping relation.
Optionally, the method further comprises:
the pixel point number determining module is used for determining the number of replaceable pixel points in the target processing picture;
the judging module is used for judging whether the number of the color coding values of the first color coding value sequence is larger than the number of the replaceable pixel points or not;
and the prompt information output module is used for outputting the prompt information that the number of the color coding values of the first color coding value sequence exceeds the number of the replaceable pixel points in the target processing picture if the judgment result of the judgment module is yes.
Optionally, the pixel number determining module is specifically configured to:
acquiring the total number of pixel points of the target processing picture, and determining the product of the total number of pixel points and a preset proportion value as the number of replaceable pixel points of the target processing picture; or
determining the total number of outer boundary pixel points of the target processing picture as the number of replaceable pixel points of the target processing picture.
Optionally, the apparatus further includes:
the monitoring unit is used for monitoring whether a picture browsing event and/or a photographing behavior event exist at present;
a browsing picture determining unit, configured to determine, if the picture browsing event currently exists, a current browsing picture of the picture browsing event as the target processing picture;
and the shooting picture determining unit is used for determining the current shooting picture of the shooting behavior event as the target processing picture if the shooting behavior event is monitored to exist currently.
Optionally, the method further comprises:
the operation obtaining module is used for obtaining the sharing operation of sharing the synthesized picture to the target user object by the current login user;
the friend judging module is used for judging whether the target user object belongs to a preset friend list of the current login user or not;
a picture sending unit, configured to: if the judgment result of the friend judging module is yes, generate a transcoding mapping relation according to the mapping relation, and send the synthesized picture and the transcoding mapping relation to the opposite terminal device where the target user object is located, so that the opposite terminal device parses the first color coding value sequence in the synthesized picture into text content different from the first descriptive information text based on the transcoding mapping relation; and if the judgment result of the friend judging module is no, send the synthesized picture to the opposite terminal device where the target user object is located, so that the opposite terminal device parses the first color coding value sequence in the synthesized picture into the first descriptive information text based on the mapping relation.
Optionally, the mapping relationship between the unit description content set and the color coding value set specifically includes:
each character in the character set and each color coding value in the color coding value set meet a one-to-one mapping relation; or alternatively
Each word in the word set and each color coding value in the color coding value set meet a one-to-one mapping relation; or alternatively
Each phrase in the phrase set and each color coding value in the color coding value set meet a one-to-one mapping relation; or alternatively
And each sentence in the sentence set and each color coding value in the color coding value set satisfy a one-to-one mapping relation.
Optionally, the method further comprises:
a text obtaining module, configured to obtain a second description information text including P units of description content for the composite picture;
the coding determining module is used for determining, according to the mapping relation, a second color coding value sequence corresponding to the P units of descriptive content included in the second descriptive information text, where P is an integer greater than or equal to 1;
and the code writing module is used for replacing the first color coding value sequence on the synthesized picture with the second color coding value sequence or writing the second color coding value sequence at a position which does not belong to the first color coding sequence on the target processing picture so as to generate a new synthesized picture.
In a fourth aspect, an embodiment of the present invention provides an input method system, including a picture processing device according to any one of the implementation manners of the third aspect.
In a fifth aspect, an embodiment of the present invention provides a computer readable storage medium, on which a computer program is stored, which program, when being executed by a processor, implements the steps of any implementation manner of the first aspect.
In a sixth aspect, an embodiment of the present invention provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of any implementation manner of the first aspect when the program is executed.
The one or more technical solutions provided by the embodiments of the invention achieve at least the following technical effects or advantages:
a first descriptive information text for a target processing picture is acquired, and the first descriptive information text and the target processing picture are synthesized to obtain a synthesized picture carrying the first descriptive information text, where the first descriptive information text carried by the synthesized picture does not occlude the main image area of the target processing picture. Thus, without requiring user intervention, the first descriptive information text and the main image area of the target processing picture automatically do not interfere with each other, so the display quality of the picture is not damaged.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic flow chart of a picture processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram showing the starting point pixel position according to an embodiment of the present invention;
FIG. 3a is a schematic diagram of a target processing image according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of a synthesized image according to an embodiment of the present invention;
FIG. 3c is an enlarged view of the information area depicted in FIG. 3b;
FIG. 4 is a schematic diagram illustrating a positional relationship between a first color code value sequence and first and second separator code values according to an embodiment of the present invention;
FIG. 5 is a block diagram of a picture processing apparatus according to an embodiment of the present invention;
fig. 6 is a physical block diagram of a picture processing device according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a picture processing method and device to solve the technical problem that descriptive information damages the display effect of a picture. The general idea is as follows:
writing a color coding value sequence representing the descriptive information text on the target processing picture to obtain a composite picture. No real characters need to be written on the picture, and the color coding values representing the descriptive information occupy only a very small number of pixel points, so the change in the picture's display effect is visually negligible and the display quality of the picture is not damaged.
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. The embodiments of the present invention and technical features in the embodiments may be combined with each other without collision. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative efforts fall within the protection scope of the present invention.
The picture processing method provided by the embodiment of the invention is applied to user equipment, and can also be applied to an APP (application) on the user equipment, where the user equipment may be a device having a picture presentation function, such as a smartphone or a tablet computer.
Referring to fig. 1, a picture processing method provided by an embodiment of the present invention includes the following steps:
s101, acquiring a first descriptive information text aiming at a target processing picture.
Specifically, the first descriptive information text for the target processing picture may be obtained in one or more of the following modes:
In the first mode, the text edited by the first user is acquired. Specifically, a text edited by the first user through a typing operation or a voice input operation is acquired.
In the second mode, a main image area in the target processing picture is identified, and a text matching the main image area is selected from a preset descriptive information text library.
Specifically, the preset description information text library includes a plurality of description information texts, for example, if the main image area is a landscape image, the description text corresponding to the landscape image is selected from the preset description information text library, and if the main image area is an animal image, the description text corresponding to the animal image is selected from the preset description information text library. If the main image area is a food image, a description text corresponding to the food image is selected from a preset description information text library.
In the third mode, the mark information text of the target processing picture is acquired.
Specifically, the mark information text of the target processing picture covers the time, place, event, and so on related to the target processing picture, for example the generation time, generation place, and related event of the picture, which may be the current time and the current location.
Specifically, the target processing picture can be obtained from a photographing application, a picture browsing application, or a video playing application. There can be many scenes; three implementation scenes are given below:
scene one: and monitoring whether a picture browsing event exists. The picture browsing event may be: and browsing the local pictures stored on the user equipment, and browsing the network pictures through the user equipment. Specifically, whether a picture browsing event exists in the picture browsing application program currently is monitored, and if so, a current browsing picture of the picture browsing event is determined as a target processing picture.
Scene II: monitoring whether a photographing behavior event exists. Specifically, whether a photographing behavior event currently exists in the photographing application program is monitored, and if so, the current photographed picture of the photographing behavior event is determined to be the target processing picture.
Scene III: monitoring whether a video play event exists. Specifically, whether a video playing event exists in the video playing application program is monitored, and if so, the current video frame picture extracted from the video playing event is determined as a target processing picture.
In the implementation process, the method can be applied to one or two of these scenes, or to all three simultaneously. It should be noted that scenes one, two, and three are only illustrative; the implementation is not limited to these three scenes, and in any scene, a picture to be presented on the display interface of the user equipment may be determined as the target processing picture.
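As an illustration of the three scenes, the monitoring step reduces to a small dispatch on the observed event; the event object and its attribute names below are hypothetical, not part of the disclosure.

```python
def pick_target_picture(event):
    """Map a monitored event to the target processing picture."""
    if event.kind == "picture_browsing":
        return event.current_picture   # scene 1: picture being browsed
    if event.kind == "photographing":
        return event.captured_picture  # scene 2: photograph just taken
    if event.kind == "video_playing":
        return event.current_frame     # scene 3: frame extracted from video
    return None                        # other scenes could be added here
```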
After step S101, step S102 is performed: the first descriptive information text and the target processing picture are subjected to synthesis processing to obtain a composite picture carrying the first descriptive information text, where the first descriptive information text carried by the composite picture does not occlude the main image area of the target processing picture.
Specifically, that the first descriptive information text does not cover the main image area of the target processing picture may specifically mean: the first descriptive information text is located outside the main image area of the target processing picture, or is hidden within the main image area of the target processing picture.
To facilitate processing the target processing picture into a composite picture, the writing of the first descriptive information text is combined with an input method. Specifically, a target input method program may be installed on the user device; after the target processing picture is obtained, the target input method program is started and performs the processing of step S102 on the target processing picture to obtain the composite picture.
In particular, in order to hide the first descriptive information text in the main image area of the target processing picture, the first descriptive information text may be represented in the main image area by a color coding value sequence, so that the first descriptive information text is not displayed on the target processing picture.
Specifically, in order to locate the first descriptive information text outside the main image area of the target processing picture, the embodiment may be: representing the first descriptive information text with the color coding value sequence outside the main image area of the target processing picture, and superimposing the first descriptive information text on an area outside the main image area so as to avoid the main image area.
Wherein, the first descriptive information text is superimposed in the area outside the main image area, and the specific implementation process is steps S1021 to S1022:
Step S1021, determining a main image area of the target processing picture.
Step S1022, superimposing the first descriptive information text on an area other than the main image area in the target processing picture.
Specifically, the first descriptive information text may cover an area outside the main image area using a floating layer, or may directly replace the RGB values of pixel points in the area outside the main image area.
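A minimal Pillow sketch of steps S1021 to S1022 under the second option (directly writing pixel values) follows; the main image area is assumed to be supplied by some detector (for example face or saliency detection), which is out of scope here, and the avoidance strategy is deliberately simple.

```python
from PIL import Image, ImageDraw

def overlay_outside(img, text, main_box):
    """Draw the descriptive text below the main image area, or above it
    when there is no room below (a very simple avoidance strategy)."""
    out = img.convert("RGB")
    draw = ImageDraw.Draw(out)
    left, top, right, bottom = main_box  # main image area from a detector
    y = bottom + 10 if bottom + 30 < out.height else max(top - 20, 0)
    draw.text((left, y), text, fill=(255, 255, 255))
    return out
```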
The main image area of the target processing picture can be determined in the same way for different users, or different main image areas can be determined for different users. The latter is realized by the following steps: determining user characteristic information of the first user who inputs the first descriptive information text; and determining the main image area on the target processing picture according to the user characteristic information of the first user.
Specifically, the main image area corresponds to the user characteristic information of the first user. In the implementation process, the user characteristic information of the first user is determined according to the collected user information of the first user. Specifically, the user information of the first user may be reported by the user himself, and users other than the first user may be friends of the first user. The user characteristic information of the first user may also be a preset user level.
In order to realize that different users see the same descriptive information text at different positions on the target processing picture, after the composite picture carrying the first descriptive information text is obtained: user characteristic information of a second user currently viewing the composite picture is determined; a new main image area on the target processing picture is determined according to the user characteristic information of the second user; and the target processing picture is presented, with the first descriptive information text displayed in an area outside the new main image area.
For example, suppose the target processing picture includes a person A and an animal B. If the main image area is determined to be person A according to the user characteristic information of the first user, the first descriptive information text is displayed in an area other than person A. If the user characteristic information of the second user is "pet lover", the new main image area is determined to be animal B according to the user characteristic information of the second user, and the first descriptive information text displayed to the second user is shown in an area other than animal B.
Specifically, the first descriptive information text presented to different users may be the same or different. In order to realize that the first descriptive information text is different for different users, the flow is as follows: determining user characteristic information of a third user currently viewing the composite picture; and presenting the target processing picture, and displaying the first descriptive information text in an area outside the main image area according to the user characteristic information of the third user.
In particular, the differences in presenting the first descriptive information text may include differences in font information and/or differences in text content.
In order to realize that different users see the first descriptive information text with different font information, the following procedure can be used: determining font information for the first descriptive information text according to the user characteristic information of the third user; and presenting the target processing picture, displaying the first descriptive information text in an area outside the main image area with that font information.
The font information may include one or more of a character color, a character size, and a handwriting type. Specifically, the color, the word size and the handwriting type of the first descriptive information text can be changed by modifying the color coding value of the pixel point occupied by the first descriptive information text.
In implementations, the user characteristic information of the third user may include user preferences, user age, and so forth. The color and the handwriting type for the first descriptive text are determined according to the user preference of the third user, and the target processing picture is presented based on the color and the handwriting type for the first descriptive text. And determining the word size matched with the user age of the third user according to the corresponding relation between the user age and the word size, and presenting the target processing picture based on the matched word size.
In order to realize that different users see different text contents of the first descriptive information text, the method can be realized by the following procedures: determining a sub-descriptive information text for current display in the first descriptive information text according to the user characteristic information of the third user; and presenting the target processing picture, and displaying the current sub-descriptive information text for display in an area outside the main image area.
The user characteristic information of the third user is determined according to the collected user information of the third user. Specifically, the user information of the third user may be reported by each user. In this embodiment, if the first descriptive text is overlaid in a floating layer form in an area other than the main image area, the floating layer of the original first descriptive text is removed, and the floating layer of the first descriptive text displayed in the new font information is overlaid in an area other than the main image area.
Specifically, the text content differs in that: some, all, or none of the first descriptive text is displayed.
For example, the first descriptive information text may include basic information and private information, the basic information may be general tag information such as a location, a time, a position, and the like, and the private information is non-general information input by a user.
The user characteristic information of the third user can take several forms, described separately below:
In a first form, the user characteristic information of the first user and the third user is specifically a user level. The first descriptive information text comprises a plurality of sub-descriptive information texts, each having an information level. A plurality of user levels can be established in advance, with a matching information level set for each user level: the higher the user level, the more sub-descriptive information is presented, and the lower the user level, the less is presented. Each sub-descriptive information text whose information level matches the user level of the third user is determined from the first descriptive information text and displayed in an area outside the main image area.
For the case where the first descriptive information text contains basic information and privacy information, three user levels may be set. The first user level displays all of the first descriptive information text, namely the basic information and the privacy information; a second user level, lower than the first, displays only the basic information in the first descriptive information text; a third user level, lower than the second, does not display the first descriptive information text.
In a second form, the user characteristic information of the third user is relationship information between the third user and the first user, and part, all, or none of the first descriptive information text is displayed according to this relationship information. The closer the relationship represented by the relationship information, the more sub-description information in the first descriptive information text is displayed, and the more distant, the less is displayed.
In an implementation, the relationship information may be affinity. The higher the affinity between the third user and the first user, the more sub-descriptive information in the first descriptive information text is displayed, and vice versa. The affinity may be determined based on the frequency of information interaction between the third user and the first user. For example, the affinity can be defined to lie between 0 and 100 and increased or decreased as the information interaction frequency rises or falls, so that the affinity, and with it the content of the first descriptive information text that the third user can see, is continuously updated.
For the example in which the first descriptive information text contains basic information and privacy information: if the affinity between the third user and the first user is lower than a preset lower limit, neither the basic information nor the privacy information is displayed; if the affinity is higher than a preset upper limit, all of the first descriptive information text is displayed, namely the basic information and the privacy information together; if neither condition is satisfied, only the basic information is displayed and the privacy information is not.
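The affinity rule reduces to two thresholds. In the sketch below the concrete limit values are placeholders, since the description only fixes the 0 to 100 scale.

```python
def visible_information(affinity, lower=20, upper=80):
    """Decide which parts of the first descriptive information text to show."""
    if affinity < lower:
        return []                       # below the lower limit: show nothing
    if affinity > upper:
        return ["basic", "private"]     # above the upper limit: show all
    return ["basic"]                    # otherwise: basic information only
```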
In another implementation, the relationship information may be a relationship category between the third user and the first user, such as relative, friend, classmate, colleague, or stranger. The amount of sub-descriptive information displayed from the first descriptive information text differs by relationship category. For example, the ordering from most to least displayed may be: relative > friend = classmate = colleague > stranger, or: relative > friend > colleague > stranger; the ordering is not limited here.
For the example in which the first descriptive information text contains basic information and privacy information: if the third user is a relative of the first user, all of the first descriptive information text is displayed, that is, the basic information and the privacy information are displayed simultaneously; if the third user is a classmate, friend, or colleague of the first user, only the basic information is displayed and the privacy information is not; if the third user is a stranger, neither the basic information nor the privacy information is displayed.
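To make the filtering logic above concrete, the following is a minimal Python sketch. The threshold values 30 and 80, the category names, and the function shape are illustrative assumptions of this rewrite, not values fixed by the embodiment.

```python
# Hedged sketch of the level/affinity/category display filtering.
# Thresholds and category names are assumptions, not fixed values.

AFFINITY_LOWER, AFFINITY_UPPER = 30, 80   # assumed preset limits on a 0-100 scale

def visible_sub_texts(basic_info, privacy_info, relation):
    """Return the sub-descriptive texts a third user may see.

    relation is either an affinity score (int, 0-100) or a relationship
    category string such as "relative", "friend", or "stranger".
    """
    if isinstance(relation, int):                 # affinity variant
        if relation < AFFINITY_LOWER:
            return []                             # show nothing
        if relation > AFFINITY_UPPER:
            return basic_info + privacy_info      # show everything
        return basic_info                         # basic information only
    if relation == "relative":                    # category variant
        return basic_info + privacy_info
    if relation in ("friend", "classmate", "colleague"):
        return basic_info
    return []                                     # stranger: show nothing

print(visible_sub_texts(["Qinghai Lake", "2018-06"], ["with family"], 85))
```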
In this embodiment, displaying different portions of the first descriptive information text can be implemented in various ways. Where each sub-descriptive information text of the first descriptive information text is displayed in an area outside the main image area by a floating layer, the implementation may be: removing the sub-descriptive information texts that do not correspond to the user level of the third user. Where the first descriptive information text is not presented in the target processing picture as a floating layer, the implementation may be: shielding the sub-descriptive information texts that do not correspond to the user level of the third user with a blank floating layer. In this way, displaying part, all, or none of the first descriptive information text can be realized.
It should be noted that the first user is the main user, that is, the user who provides the first descriptive information text; the second user and the third user may each be the same user as the first user or a different user.
Through steps S1021 to S1022, the first descriptive information text automatically avoids the main image area of the target processing picture. The main image area is therefore never affected, which reduces the damage that descriptive information in a picture does to picture quality.
Representing the first descriptive information text in the target processing picture by a color coding value sequence, so that the first descriptive information text is not visibly displayed in the target processing picture, may be implemented as follows:
Step S1021': converting the first descriptive information text into a first color coding value sequence.
Step S1022': writing the first color coding value sequence representing the first descriptive information text into the target processing picture, to obtain a synthesized picture carrying the first color coding value sequence.
Specifically, the first descriptive information text includes M unit descriptive contents, where M is an integer greater than or equal to 1. Step S1021' is then specifically: determining the first color coding value sequence corresponding to the M unit descriptive contents according to the mapping relation between a unit descriptive content set and a color coding value set. Step S1022' is specifically: writing the first color coding value sequence into the target processing picture, to obtain a synthesized picture in which the first descriptive information text is represented by the first color coding value sequence.
The unit descriptive content may be defined in any of the following ways:
Definition one: the unit descriptive content is a single character. A character here may be a letter, a digit, a Chinese character, or a symbol. For example, if the first descriptive information text is 'Qinghai Lake' (in Chinese, 青海湖), it comprises 3 characters, that is, three unit descriptive contents. For another example, if the first descriptive information text is 'blue sky' (蓝天), it comprises 2 characters, that is, 2 unit descriptive contents.
Definition two: the unit descriptive content is a single word. For example, if the first descriptive information text is 'Qinghai Lake' (青海湖), it comprises one word, that is, 1 unit descriptive content. For another example, if the first descriptive information text is 'blue sky and white cloud' (蓝天白云), it comprises 2 words, that is, 2 unit descriptive contents.
Definition three: the unit descriptive content is a single phrase, a phrase being a combination of two or more words. For example, if the first descriptive information text is 'at home', it comprises one phrase, that is, 1 unit descriptive content. For another example, if the first descriptive information text is 'singing actor', it comprises 1 phrase, that is, 1 unit descriptive content.
Definition four: the unit descriptive content is a single sentence. A single sentence comprises at least a subject and a predicate, and may comprise other sentence components as well. For example, if the first descriptive information text is 'I am watching television', it comprises 1 sentence, that is, 1 unit descriptive content.
Specifically, the first descriptive information text including the M unit descriptive contents may be entered by voice or by key input through the target input method program.
Specifically, the mapping relation between the unit descriptive content set and the color coding value set means the following: the unit descriptive content set comprises a plurality of unit descriptive contents and the color coding value set comprises the same number of color coding values; each color coding value in the color coding value set maps one-to-one to a unit descriptive content in the unit descriptive content set; and every color coding value in the color coding value set is unique, with no repetition.
Specifically, the color coding value may be an RGB (red green blue) color coding value. In practice, the color coding value may also use another color mode, for example a Lab color coding value, a CMYK (cyan magenta yellow key) color coding value, or an HSB (hue saturation brightness) color coding value. Which is used is determined by the color mode of the target processing picture, ensuring that the color coding values match the color mode of the target processing picture.
In practice, the unit descriptive content set takes different forms for the different definitions of unit descriptive content, and the mapping relation between the unit descriptive content set and the color coding value set differs correspondingly. Each form is described in detail below:
The first form of the unit descriptive content set is a character set. The character set comprises characters of various types, such as letters, digits, Chinese characters, and symbols, and the characters in the character set and the color coding values in the color coding value set satisfy a one-to-one mapping relation.
Taking RGB coding values as an example: the RGB color mode can represent 256 × 256 × 256, approximately 16.78 million, distinct values, while there are fewer than 100,000 Chinese characters. Even counting letters, digits, Chinese characters, and symbols together, the number of characters is far below 16 million, so the color coding values of all characters can be encoded and stored on the basis of RGB color coding values, giving each character in the character set a unique RGB coding value. Characters of many languages can likewise be added to the mapping relation, thereby supporting picture information description in multiple languages.
Taking 'Qinghai Lake' (青海湖) as an example, the RGB coding values of its three Chinese characters may be: 青 = {11,32,43}, 海 = {51,15,96}, 湖 = {221,223,1}. The three characters thus map one-to-one to three sets of RGB coding values, and the first color coding value sequence corresponding to 'Qinghai Lake' is: {11,32,43}, {51,15,96}, {221,223,1}. Writing this descriptive information therefore occupies only 3 pixel points.
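As an illustration of this character-level mapping, the following Python sketch builds the sequence for the worked example above; the three RGB triples are the ones quoted in the text, while the dictionary itself is an assumed stand-in for a full character table.

```python
# Hedged sketch: per-character RGB mapping for the worked example.
# A real character set would enumerate far more entries systematically.

CHAR_TO_RGB = {
    "青": (11, 32, 43),    # "qing"
    "海": (51, 15, 96),    # "hai"
    "湖": (221, 223, 1),   # "hu" (lake)
}

def text_to_color_sequence(text):
    # One RGB triple per unit descriptive content (here: per character)
    return [CHAR_TO_RGB[ch] for ch in text]

print(text_to_color_sequence("青海湖"))
# [(11, 32, 43), (51, 15, 96), (221, 223, 1)]
```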
The second form of the unit descriptive content set is a word set. The word set may include Chinese words, English words, and so on, and each word in the word set and each color coding value in the color coding value set satisfy a one-to-one mapping relation.
Specifically, words may be stored with unique codes based on RGB coding values, such that each word in the word set has a unique RGB coding value.
Taking 'blue sky and white cloud' as an example, the RGB coding values of its words may be: 'blue sky' = {12,1,43} and 'white cloud' = {21,12,45}. The two words thus map one-to-one to two sets of RGB coding values, and the first color coding value sequence corresponding to 'blue sky and white cloud' is: {12,1,43}, {21,12,45}. Only two pixel points are occupied to satisfy the writing of the descriptive information.
The third form of the unit descriptive content set is a phrase set; each phrase in the phrase set and each color coding value in the color coding value set satisfy a one-to-one mapping relation.
Specifically, phrases may be stored with unique codes based on RGB coding values, such that each phrase in the phrase set has a unique RGB coding value.
For example, 'at home' comprises 1 phrase, and its unique RGB coding value may be: 'at home' = {11,9,234}. The first color coding value sequence corresponding to 'at home' is then: {11,9,234}; only one pixel point is occupied to satisfy the writing of the descriptive information.
The fourth form of the unit descriptive content set is a sentence set; each sentence in the sentence set and each color coding value in the color coding value set satisfy a one-to-one mapping relation. Specifically, sentences may be uniquely encoded based on RGB coding values, such that each sentence in the sentence set has a unique RGB coding value.
For example, for the sentence 'I am watching television', the unique RGB coding value may be: 'I am watching television' = {168,0,24}. The first color coding value sequence corresponding to 'I am watching television' is then: {168,0,24}; only one pixel point is occupied to carry descriptive information of several words, which reduces the changes made to the picture and better preserves its original appearance.
It should be noted that, in practice, the unit descriptive content set may take only one of the above four forms, or may include all four simultaneously. If the unit descriptive content set includes all four forms, step S1021' proceeds as follows:
First, the first color coding value sequence corresponding to the first descriptive information text is searched for in the mapping relation between the sentence set and the color coding value set. If the search result is null, it is searched for in the mapping relation between the phrase set and the color coding value set; if still null, in the mapping relation between the word set and the color coding value set; and if still null, in the mapping relation between the character set and the color coding value set. Searching in this order minimizes the number of pixel points of the picture occupied by the descriptive information, while guaranteeing that color coding values representing the descriptive information are found.
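The lookup order just described can be sketched as follows in Python. The four dictionaries are assumed inputs, and for simplicity this sketch looks the whole text up at each level rather than segmenting it into phrases or words, which a full implementation would also do.

```python
# Hedged sketch of the sentence -> phrase -> word -> character lookup order.
# The four mapping tables are assumed to be supplied by the application.

def encode_text(text, sentence_map, phrase_map, word_map, char_map):
    # Coarse units first: a single hit encodes the whole text in one value
    for table in (sentence_map, phrase_map, word_map):
        if text in table:
            return [table[text]]
    # Fallback: per-character encoding, assumed to always be possible
    return [char_map[ch] for ch in text]

char_map = {"青": (11, 32, 43), "海": (51, 15, 96), "湖": (221, 223, 1)}
print(encode_text("青海湖", {}, {}, {}, char_map))   # falls through to characters
```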
Next, execution of step S1021' by the target input method program is described: the target input method program calls the mapping relation between the unit descriptive content set and the color coding value set pre-established within it, and determines the first color coding value sequence corresponding to the M unit descriptive contents. The specific implementation of step S1021' follows the foregoing description and, for brevity, is not repeated here.
In step S1022', writing the first color coding value sequence into the target processing picture may specifically be: replacing the original color coding values of as many pixel points of the target processing picture as there are groups of color coding values in the first color coding value sequence. For example, if the first color coding value sequence contains 3 groups of color coding values, the original color coding values of 3 pixel points of the target processing picture are replaced; if it contains 2 groups, the original color coding values of 2 pixel points are replaced.
To further reduce the adverse effect of writing the first color coding value sequence on picture quality, the original color coding values of M outer boundary pixel points of the target processing picture are replaced, in sequence, by the M color coding values of the first color coding value sequence. Because outer boundary pixel points are generally not easily noticed, changing the color coding values of a small number of them does not affect picture quality. The outer boundary pixel points of the target processing picture are specifically those pixel points of the picture that have no adjacent pixel point on at least one side.
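A minimal Pillow-based sketch of this boundary write is given below; writing along the bottom edge from left to right is an assumption, since the embodiment only requires that outer boundary pixel points be used. Note that the synthesized picture must be saved in a lossless format such as PNG, because lossy compression would perturb the exact coding values.

```python
# Hedged sketch of step S1022': overwrite the original color values of
# M outer-boundary pixels with the first color coding value sequence.

from PIL import Image

def write_sequence(img, sequence):
    w, h = img.size
    assert len(sequence) <= w, "sequence longer than one boundary edge"
    for i, rgb in enumerate(sequence):
        img.putpixel((i, h - 1), rgb)   # bottom edge, left to right (assumed)
    return img

img = Image.open("target.jpg").convert("RGB")       # hypothetical input file
write_sequence(img, [(11, 32, 43), (51, 15, 96), (221, 223, 1)])
img.save("composite.png")   # lossless format so the exact values survive
```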
Further, in order to distinguish the first color coding value sequence from the unreplaced color coding values of the synthesized picture during parsing, and to identify reliably whether a picture already carries descriptive information, several embodiments are possible:
One embodiment is: a starting-point pixel position is set in advance in the outer boundary area of the target processing picture; a start position identifier is added at the starting-point pixel position, the identifier representing information about the pixel points after the starting-point pixel position whose color coding values are to be replaced; and the original color coding values of the M outer boundary pixel points after the starting-point pixel position are replaced, in sequence, by the M color coding values of the first color coding value sequence.
In one implementation, the pixel point related information may be the number M of pixel points whose color coding values are to be replaced after the starting-point pixel position; the start position identifier is then a color coding value representing the number M. For example, if the start position identifier at the starting-point pixel position is the color coding value corresponding to the value 3, the original color coding values of the 3 outer boundary pixel points after the starting-point pixel position are replaced, in sequence, by the 3 color coding values of the first color coding value sequence.
In another implementation, the pixel point related information may be the position coordinate of the last pixel point whose color coding value is to be replaced, expressed as a color coding value representing that position coordinate.
In yet another implementation, the pixel point related information may be the coordinate difference between the position coordinate of the last pixel point whose color coding value is to be replaced and the starting-point pixel position; the start position identifier is then a color coding value representing this coordinate difference.
In practice, starting from the outer boundary pixel point after the preset starting-point pixel position, the original color coding values of the M outer boundary pixel points are replaced clockwise or counterclockwise, in sequence, by the M color coding values of the first color coding value sequence. When the synthesized picture is later parsed, the starting position of the first color coding value sequence is therefore known exactly, and because the starting-point pixel position carries a color coding value representing the value M, the number of replaced pixel points is also known. Combining the two, the first descriptive information text can be parsed accurately, so the descriptive information in the picture is obtained accurately.
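The start-marker variant can be sketched as follows; encoding M in the blue channel of the start pixel and using the upper-left corner as the pre-agreed starting point are both assumptions of this sketch.

```python
# Hedged sketch of the start-identifier variant. The pre-agreed start
# pixel and the single-channel encoding of M are assumptions.

def write_with_start_marker(img, sequence, start=(0, 0)):
    x, y = start                        # assumed pre-agreed start pixel
    m = len(sequence)
    assert m < 256, "M must fit the assumed one-channel encoding"
    img.putpixel((x, y), (0, 0, m))     # start identifier: M in blue channel
    for i, rgb in enumerate(sequence):
        img.putpixel((x + 1 + i, y), rgb)   # clockwise along the top edge
    return img
```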
Specifically, referring to fig. 2, each small square in fig. 2 represents 1 pixel point; the starting-point pixel position may be the outer boundary pixel point a at the upper left corner, the outer boundary pixel point b at the lower left corner, the outer boundary pixel point c at the upper right corner, or the pixel point d at the lower right corner.
Next, the effect of step S1022' is described with reference to figs. 3a to 3c. If the target processing picture is as shown in fig. 3a, step S1022' writes the first color coding value sequence 'on the road' = {10,21,234}, {4,156,86}, {141,78,179} into the target processing picture, and the resulting synthesized picture is shown in fig. 3b. Referring to fig. 3c, starting from the outer boundary pixel point at the lower left corner of the synthesized picture, the original color coding values of 3 pixel points are replaced clockwise by {10,21,234}, {4,156,86}, {141,78,179}.
Another embodiment may be: adding specific separator coding values between the pixel points of the synthesized picture carrying the first color coding value sequence and the pixel points whose color coding values were not replaced. Specifically, this is realized through the following steps A and B:
Step A: replacing the original color coding values of N consecutive outer boundary pixel points before the M outer boundary pixel points with a first separator coding value, where N is an integer greater than 1.
Step B: replacing the original color coding values of K consecutive outer boundary pixel points after the M outer boundary pixel points with a second separator coding value, where K is an integer greater than 1.
It should be noted that, in practice, step A and step B may be performed in either order or simultaneously.
Specifically, the first separator coding value and the second separator coding value may be the same, for example both {0,0,0} or both {255,255,255}; they may also differ, for example a first separator coding value of {0,0,0} and a second separator coding value of {255,255,255}. Both separator coding values are color coding values that have no mapping relation to any unit descriptive content, which prevents a separator coding value from being misrecognized as part of the descriptive information during later parsing.
For example, referring to fig. 4, the original color coding values of the 5 outer boundary pixel points before the M outer boundary pixel points carrying the first color coding value sequence may be replaced with the first separator coding value, and the original color coding values of the 5 outer boundary pixel points after them with the second separator coding value.
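A sketch of this separator framing follows; the separator values {0,0,0} and {255,255,255} are the ones named above, and N = K = 5 matches the fig. 4 example but is otherwise an assumption.

```python
# Hedged sketch of steps A and B: frame the sequence with separator runs.

SEP1, SEP2 = (0, 0, 0), (255, 255, 255)   # example separator coding values
N = K = 5                                 # run lengths, per the fig. 4 example

def write_with_separators(img, sequence, y=0, x0=0):
    framed = [SEP1] * N + list(sequence) + [SEP2] * K
    for i, rgb in enumerate(framed):
        img.putpixel((x0 + i, y), rgb)    # assumed: top edge, left to right
    return img
```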
Next, execution of step S1022' by the target input method program is described: the first color coding value sequence is written into the target processing picture through the target input method, obtaining a synthesized picture in which the first descriptive information text is represented by the first color coding value sequence.
In practice, the implementation of writing the first color coding value sequence into the target processing picture through the target input method follows the foregoing description of step S1022' and, for brevity, is not repeated here.
The synthesized picture obtained in step S1022' may be sent to a peer user equipment, realizing the sharing of a picture that carries information; it may also be stored on the user equipment.
After step S1022', if the descriptive information of the picture needs to be presented, the following steps S103 to S104 are performed:
Step S103: acquiring an operation instruction for opening the synthesized picture, and parsing the first color coding value sequence in the synthesized picture according to the mapping relation between the unit descriptive content set and the color coding value set, to obtain the first descriptive information text comprising the M unit descriptive contents.
Specifically, step S103 may be: calling, through the target input method program, the mapping relation between the unit descriptive content set and the color coding value set pre-established in the target input method program, and parsing the first color coding value sequence in the synthesized picture to obtain the first descriptive information text.
Of course, the mapping relation between the unit descriptive content set and the color coding value set may instead be pre-established in an image processing application, in which case step S103 is: parsing the first color coding value sequence in the synthesized picture by calling the mapping relation pre-established in the image processing application, to obtain the first descriptive information text.
In practice, whether step S103 is executed by the target input method program or completed by the image processing application, it is specifically: parsing the first color coding value sequence in the synthesized picture according to the mapping relation, to obtain the first descriptive information text.
Several specific implementations of this parsing are provided below:
One implementation of step S103 is: parsing the start position identifier from the starting-point pixel position of the synthesized picture; determining, according to the start position identifier, the color coding values of the M outer boundary pixel points after the starting-point pixel position as the first color coding value sequence; and parsing the first color coding value sequence into the first descriptive information text according to the mapping relation.
Specifically, if the start position identifier represents the number M of pixel points whose color coding values were replaced after the starting-point pixel position, the number M is determined from the color coding value at the starting-point pixel position, and the color coding values of the M pixel points after the starting-point pixel position are then determined as the first color coding value sequence.
If the start position identifier represents the position coordinate of the last replaced pixel point, the position coordinate of the pixel point carrying the last color coding value of the first color coding value sequence is determined from the color coding value at the starting-point pixel position. The color coding values of the pixel points after the starting-point pixel position are then read in sequence, up to and including the pixel point at that position coordinate, and all the color coding values so read together constitute the first color coding value sequence.
If the start position identifier represents the coordinate difference between the position coordinate of the last replaced pixel point and the starting-point pixel position, the coordinate difference is determined from the color coding value at the starting-point pixel position, and the position coordinate of the pixel point carrying the last color coding value of the first color coding value sequence is determined from that difference and the starting-point pixel position. The color coding values of the pixel points after the starting-point pixel position are then read in sequence, up to and including the pixel point at that position coordinate, and all the color coding values so read together constitute the first color coding value sequence.
If the synthesized picture has no starting-point pixel position, step S103 may be implemented as: parsing the synthesized picture to determine the position of the first separator coding value and the position of the second separator coding value; determining the color coding values of the sequential pixel points between the position of the first separator coding value and the position of the second separator coding value as the first color coding value sequence; and parsing the first color coding value sequence into the first descriptive information text according to the mapping relation between the unit descriptive content set and the color coding value set.
Specifically, starting from recognition of the first separator coding value, the color coding values of the outer boundary pixel points after the position of the first separator coding value are parsed in sequence according to the mapping relation between the unit descriptive content set and the color coding value set, ending when the second separator coding value is recognized. The M color coding values recognized after the position of the first separator coding value and before the position of the second separator coding value are determined as the first color coding value sequence, which corresponds to the first descriptive information text comprising the M unit descriptive contents.
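The separator-based parse can be sketched as follows; scanning a single boundary row and the small lookup table are assumptions matching the earlier writing sketches.

```python
# Hedged sketch of parsing between the two separator runs.

SEP1, SEP2 = (0, 0, 0), (255, 255, 255)   # must match the writing side
RGB_TO_CHAR = {(11, 32, 43): "青", (51, 15, 96): "海", (221, 223, 1): "湖"}

def parse_between_separators(img, y=0):
    w, _ = img.size
    seq, inside = [], False
    for x in range(w):
        rgb = img.getpixel((x, y))
        if rgb == SEP1:
            inside = True        # entering (or still inside) the lead-in run
            continue
        if rgb == SEP2:
            break                # trailing separator run ends the sequence
        if inside:
            seq.append(rgb)
    return "".join(RGB_TO_CHAR[rgb] for rgb in seq)
```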
A further implementation of step S103 is: searching the outer boundary area of the synthesized picture for a first jumping pixel point and a second jumping pixel point; determining the color coding values of the sequential pixel points between the position of the first jumping pixel point and the position of the second jumping pixel point as the first color coding value sequence; and parsing the first color coding value sequence into the first descriptive information text according to the mapping relation.
Specifically, the first jumping pixel point is the pixel point carrying the first color coding value of the first color coding value sequence, and the second jumping pixel point is the pixel point carrying the last color coding value of the first color coding value sequence. A jumping pixel point is a pixel point whose color coding value differs from that of the preceding pixel point by more than a predetermined difference value.
In practice, after the first jumping pixel point is found, subsequent pixel points are examined in sequence until a preset number of non-jumping pixel points are found; the jumping pixel point adjacent to those non-jumping pixel points is determined as the second jumping pixel point.
It should be noted that this jumping-pixel-based parsing may be performed when neither the first and second separator coding values nor a start position identifier is found in the synthesized picture, which improves the likelihood of accurately parsing the descriptive information in the picture.
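The jump-based fallback can be sketched as follows; the L1 color distance, the threshold of 60, and the calm-run length of 4 are all assumptions standing in for the predetermined values the text leaves open.

```python
# Hedged sketch of the jumping-pixel fallback parse.

JUMP_THRESHOLD = 60   # assumed predetermined difference value
CALM_RUN = 4          # assumed preset number of non-jumping pixels

def is_jump(a, b):
    # L1 distance between RGB triples; the metric itself is an assumption
    return sum(abs(u - v) for u, v in zip(a, b)) > JUMP_THRESHOLD

def find_hidden_sequence(row):
    jumps = [i for i in range(1, len(row)) if is_jump(row[i - 1], row[i])]
    if not jumps:
        return []
    first, last, calm = jumps[0], jumps[0], 0
    for i in range(jumps[0] + 1, len(row)):
        if is_jump(row[i - 1], row[i]):
            last, calm = i, 0
        else:
            calm += 1
            if calm >= CALM_RUN:
                break
    # The final jump is taken to be the transition back to ordinary
    # picture colors, so the sequence occupies row[first:last].
    return row[first:last]
```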
After step S103, step S104 is performed: and outputting the first descriptive information text. Specifically, the first descriptive information text may be output in the form of voice or text.
Further, in order to limit the influence of the first color coding value sequence on picture quality, the method also includes the following steps once the first color coding value sequence has been determined: determining the number of replaceable pixel points in the target processing picture; judging whether the number of color coding values in the first color coding value sequence is greater than the number of replaceable pixel points; and if so, outputting prompt information that the number of color coding values of the first color coding value sequence exceeds the number of replaceable pixel points of the target processing picture. This prompt tells the user to reduce the number of words in the entered descriptive text, so as not to spoil the picture effect.
Specifically, the prompt information may be presented on the target processing picture in a floating window, shown on a candidate interface of the target input method, or output as voice.
In practice, the number of replaceable pixel points of the target processing picture may be determined in either of the following ways:
Mode one: obtaining the total number of pixel points of the target processing picture, and determining the product of the total number of pixel points and a preset proportion value as the number of replaceable pixel points of the target processing picture.
Specifically, the picture size of the target processing picture can be obtained from the picture details of the target processing picture, and the total number of pixel points calculated from it. For example, if the picture size of the target processing picture is 2084 × 1536, the total number of pixel points of the target processing picture = 2084 × 1536 = 3201024.
Specifically, the larger the picture size of the target processing picture, the more pixel points it has, and the larger the corresponding preset proportion value may be. The preset proportion value may also be a constant independent of the picture size, for example a fixed value in the range of 0.1% to 5%.
Mode two: determining the total number of outer boundary pixel points of the target processing picture as the number of replaceable pixel points of the target processing picture.
The picture size of the target processing picture is obtained from the picture details, and the total number of outer boundary pixel points is calculated from it. For example, if the picture size of the target processing picture is 2084 × 1536, the total number of outer boundary pixel points = 2 × (2084 + 1536) − 4 = 7236.
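The two budgets can be computed as in the following sketch, using the worked 2084 × 1536 example; the 1% ratio in mode one is an assumed value inside the 0.1% to 5% range quoted above.

```python
# Hedged sketch of the two replaceable-pixel budgets.

def budget_mode_one(w, h, ratio=0.01):
    return int(w * h * ratio)      # fraction of all pixels (ratio assumed)

def budget_mode_two(w, h):
    return 2 * (w + h) - 4         # exact count of outer boundary pixels

w, h = 2084, 1536
print(budget_mode_one(w, h))       # 32010
print(budget_mode_two(w, h))       # 7236

def sequence_fits(sequence, budget):
    return len(sequence) <= budget  # otherwise prompt the user to shorten text
```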
Furthermore, in order to prevent the descriptive information of a picture from being seen by the receiving user when the picture is shared, a privacy-protecting effect for the descriptive information can be achieved. In this embodiment, therefore, after the synthesized picture in which the first descriptive information text is represented by the first color coding value sequence is obtained, the method further includes the following steps:
Step S103': obtaining a sharing operation by which the currently logged-in user shares the synthesized picture with a target user object.
It should be noted that the target user object may be one entry in the friend list of the currently logged-in user. Specifically, the currently logged-in user may be the user currently logged into the target input method program installed on the user equipment; that user may establish an input method friend list in the target input method program, and the target user object is one entry in that list. Alternatively, the currently logged-in user may be the user currently logged into the application where the target processing picture resides, such as a photographing application, a picture browsing application, or a video playing application; that user may establish a picture-sharing friend list in that application, and the target user object is one entry in that list.
After step S103', step S104' is performed: judging whether the target user object is in a preset friend list of the currently logged-in user.
It should be noted that the friends in the preset friend list are a subset of the friend list of the currently logged-in user. In one implementation, the friend list of the currently logged-in user may be divided into multiple levels, and the preset friend list comprises the friends of one or more of the lower levels; which levels belong to the preset friend list may be set by the currently logged-in user. The friends in the preset friend list are those for whom the descriptive information in the picture should be invisible. Through the preset friend list, all friends of the currently logged-in user are thus divided into friends who may see the in-picture descriptive information and friends who may not: the descriptive information is invisible to friends in the preset friend list and visible to the other friends, achieving the effect that in-picture descriptive information is invisible to friends of insufficient level.
If the determination result of step S104' is no, step S105a' is performed: sending the synthesized picture to the peer equipment where the target user object is located, so that the peer equipment parses the first color coding value sequence in the synthesized picture into the first descriptive information text based on the mapping relation between the M unit descriptive contents included in the first descriptive information text and the color coding values.
In practice, S105a' may take various forms. For example, the mapping relation between the unit descriptive content set and the color coding value set held on the user equipment, or in the target input method of the user equipment, may be sent to the peer equipment where the target user object is located. Alternatively, only the mapping relation between the M unit descriptive contents of the first descriptive information text and the color coding values may be extracted from the full mapping relation and sent to the peer equipment, reducing the amount of data transmitted.
If the currently logged-in user is the user currently logged into the target input method program installed on the user equipment, the target input method program is also installed on the peer equipment, and the mapping relation between the unit descriptive content set and the color coding value set is established in the target input method program, then the user equipment sends only the synthesized picture; the peer equipment calls the mapping relation in its own installed target input method program and parses the first color coding value sequence in the synthesized picture into the first descriptive information text.
Through step S105a', the target user object can read the descriptive information hidden in the synthesized picture.
If the determination result of step S104' is yes, step S105b' is performed: generating a transcoding mapping relation from the mapping relation between the unit descriptive content set and the color coding value set, and sending the synthesized picture and the transcoding mapping relation to the peer equipment where the target user object is located, so that the peer equipment parses the first color coding value sequence in the synthesized picture into text content different from the first descriptive information text based on the transcoding mapping relation.
Step S105b' is specifically: generating a transcoding mapping relation between the M unit descriptive contents contained in the first descriptive information text and color coding values, using the mapping relation between the unit descriptive content set and the color coding value set together with an encryption transcoding rule set by the currently logged-in user; and sending the synthesized picture and this transcoding mapping relation to the peer equipment where the target user object is located, thereby reducing the amount of data transmitted.
It should be noted that the encryption transcoding rule may add the same coding offset to the color coding values of the M unit descriptive contents; the coding offset may be, for example, one of 1, 2, 3, 4, 5, 6, and so on. Taking a coding offset of 5 as an example: according to the mapping relation between the unit descriptive content set and the color coding value set, the first color coding value sequence corresponding to 'Qinghai Lake' is {11,32,43}, {51,15,96}, {221,223,1}; after adding the offset 5 to obtain the transcoding mapping relation, the color coding sequence corresponding to 'Qinghai Lake' becomes {16,37,48}, {56,20,101}, {226,228,6}. Even if the peer equipment where the target user object is located obtains the synthesized picture, it cannot correctly parse the true descriptive information, achieving the purpose of privacy protection.
The encryption transcoding rule may also invert the color coding values of the M unit descriptive contents to obtain the transcoding mapping relation. Taking 'Qinghai Lake' as an example again, the transcoded color coding sequence corresponding to 'Qinghai Lake' is then: {244,223,212}, {204,240,159}, {34,32,254}.
More preferably, the encryption transcoding rules of the two embodiments above may be set in the target input method program alternatively or simultaneously. When step S105b' is executed, one encryption transcoding rule is selected at random and used to encrypt and transcode the mapping relation between the M unit descriptive contents and the color coding values, generating the transcoding mapping relation for the M unit descriptive contents of the first descriptive information text and increasing the difficulty of decoding the descriptive information.
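The two transcoding rules can be sketched as follows; taking the offset modulo 256 so channel values stay valid is an assumption the text leaves implicit, and the printed results reproduce the worked 'Qinghai Lake' figures, with the inversion of {11,32,43} being {244,223,212}.

```python
# Hedged sketch of the two encryption transcoding rules.

def transcode_offset(sequence, offset=5):
    # Same offset added to every channel; modulo 256 keeps values valid
    return [tuple((c + offset) % 256 for c in rgb) for rgb in sequence]

def transcode_invert(sequence):
    # Channel inversion: 255 - value
    return [tuple(255 - c for c in rgb) for rgb in sequence]

qinghai = [(11, 32, 43), (51, 15, 96), (221, 223, 1)]
print(transcode_offset(qinghai))   # [(16, 37, 48), (56, 20, 101), (226, 228, 6)]
print(transcode_invert(qinghai))   # [(244, 223, 212), (204, 240, 159), (34, 32, 254)]
```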
Through steps S104', S105a', and S105b', different friends with whom the synthesized picture is shared are presented with different descriptive information, and the real descriptive information can be seen only by friends who do not belong to the preset friend list, achieving the purpose of protecting picture privacy by audience.
In a further technical scheme, if a sharing operation by which a non-logged-in user shares the synthesized picture with a target user object is obtained, the synthesized picture and the mapping relation between the M unit descriptive contents of the first descriptive information text and the color coding values are sent to the peer equipment where the target user object is located, so that the peer equipment parses the first color coding value sequence in the synthesized picture into the first descriptive information text based on that mapping relation. For a sharing operation by a non-logged-in user, therefore, any user who obtains the synthesized picture can see the descriptive information in it.
Furthermore, an embodiment of the invention also provides a way of adding new descriptive information to the synthesized picture, so as to update or extend the descriptive information it carries. Specifically, after step S102, the following steps S103" and S104" are executed:
Step S103": obtaining a second descriptive information text for the synthesized picture comprising P unit descriptive contents, and determining the second color coding value sequence corresponding to the P unit descriptive contents included in the second descriptive information text according to the mapping relation between the unit descriptive content set and the color coding value set, where P is an integer greater than 1.
In practice, determining the second color coding value sequence corresponding to the P unit descriptive contents of the second descriptive information text is the same as, or similar to, determining the first color coding value sequence corresponding to the M unit descriptive contents in step S1021', and, for brevity, is not repeated here.
Step S104": replacing the first color coding value sequence on the synthesized picture with the second color coding value sequence, or writing the second color coding value sequence at a position on the target processing picture where the first color coding sequence is not present, to generate a new synthesized picture.
If the first color coding value sequence on the synthesized picture is replaced, the specific implementation is as follows:
In the same manner as step S1022', starting from the outer boundary pixel point at the starting-point pixel position of the target processing picture, the first color coding value sequence on the M outer boundary pixel points is replaced clockwise or counterclockwise, in sequence, by the P color coding values of the second color coding value sequence.
If the second color coding value sequence contains more groups of color coding values than the first color coding value sequence, the color coding values of the pixel points after the first color coding value sequence continue to be replaced until all color coding values of the second color coding value sequence have been written.
If the second color coding value sequence contains fewer groups of color coding values than the first color coding value sequence, the color coding values of the first sequence not replaced by the second are padded with a specific color coding value that is absent from the mapping relation between the unit descriptive content set and the color coding value set, e.g., {0,0,0}.
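The following minimal sketch shows the overwrite with padding; the pad value {0,0,0} follows the example above, and writing along a single row is an assumption consistent with the earlier sketches.

```python
# Hedged sketch of replacing the first sequence with a second one.

PAD = (0, 0, 0)   # color value outside the mapping relation, per the example

def overwrite_sequence(img, old_len, new_sequence, y=0, x0=0):
    padded = list(new_sequence) + [PAD] * max(0, old_len - len(new_sequence))
    for i, rgb in enumerate(padded):
        img.putpixel((x0 + i, y), rgb)   # a longer sequence simply runs on
    return img
```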
Through steps S103" to S104", the descriptive information in the picture becomes modifiable, so it can be overwritten at any time.
If the second color coding value sequence is instead written at a position on the target processing picture where the first color coding sequence is not present, descriptive information is added without affecting the original descriptive information in the picture.
It should be noted that whether the second color coding value sequence adds to or replaces the first color coding value sequence may be fixed as one of the two alternatives, or may depend on the second descriptive information text, specifically:
After step S103", it is judged whether the second descriptive information text and the first descriptive information text belong to the same information type; if so, the first color coding value sequence on the synthesized picture is replaced with the second color coding value sequence; otherwise, the second color coding value sequence is written at a position on the target processing picture not occupied by the first color coding sequence.
The information type of descriptive information may be picture content, time information, location information, and so on. If the first color coding value sequence and the second color coding value sequence both correspond to picture content, both to location information, or both to time information, the first color coding value sequence on the synthesized picture is replaced with the second color coding value sequence.
Through steps S1021' to S1022', the picture carries the descriptive information through color coding values, without real characters being written onto the picture. Because the color coding values representing the descriptive information occupy only a very small number of pixel points, the change to the picture's display effect is visually negligible and the display quality of the picture is not destroyed.
Based on the same inventive concept, an embodiment of the invention provides a method for processing pictures based on an input method; for the specific implementation details of this method, reference may be made to the description in the foregoing embodiment of the picture processing method.
Based on the same inventive concept, an embodiment of the present invention provides a picture processing apparatus, as shown in fig. 5, including the following:
a descriptive information text acquisition unit 501 for acquiring a first descriptive information text for a target processing picture;
the synthesis processing unit 502 is configured to perform synthesis processing on the first descriptive information text and the target processing picture, so as to obtain a synthesized picture carrying the first descriptive information text, where the first descriptive information text carried by the synthesized picture does not obstruct the main image area of the target processing picture.
In a specific embodiment, the first descriptive information text does not obstruct the main image area of the target processing picture, specifically:
the first descriptive information text is positioned outside the main image area of the target processing picture, or
The first descriptive information text is implicit in the main image area of the target processing picture.
In a specific embodiment, the synthesis processing unit 502 includes:
a region determining subunit, configured to determine a main image region of the target processing picture;
and the superposition processing subunit is used for superposing the first descriptive information text to an area except for the main image area in the target processing picture.
In a specific embodiment, the region determining subunit is specifically configured to:
determining user characteristic information of a first user inputting the first descriptive information text;
and determining the main image area on the target processing picture according to the user characteristic information of the first user.
In a specific embodiment, the apparatus further comprises:
a first feature information determining unit configured to determine user feature information of a second user currently viewing the composite picture;
a new region determining unit, configured to determine a new main image region on the target processing picture according to user feature information of the second user;
And the first display unit is used for presenting the target processing picture and displaying the first descriptive information text in an area except the new main image area.
In a specific embodiment, the apparatus further comprises:
a second feature information determining unit for determining user feature information of a third user currently viewing the composite picture;
and the second display unit is used for presenting the target processing picture and displaying the first descriptive information text in an area outside the main image area according to the user characteristic information of the third user.
In a specific embodiment, the second display unit is specifically configured to:
determining font information for the first descriptive information text according to user characteristic information of a third user;
and rendering the target processing picture, and displaying the first descriptive information text in an area outside the main image area by using the font information.
In a specific embodiment, the second display unit is further specifically configured to:
determining a sub-descriptive information text for current display in the first descriptive information text according to user characteristic information of a third user;
and presenting the target processing picture, and displaying the current sub-descriptive information text for display in an area outside the main image area.
In a specific embodiment, the synthesis processing unit includes:
a conversion subunit, configured to convert the first description information text into a first color coding value sequence;
and the writing subunit is used for writing a first color coding value sequence representing a first descriptive information text in the target processing picture to obtain a synthesized picture carrying the first color coding value sequence, wherein the first descriptive information text is not displayed on the target processing picture.
In a specific embodiment, the first descriptive information text includes M units of descriptive content, M being an integer greater than or equal to 1;
the conversion subunit is specifically configured to: determining a first color coding value sequence corresponding to M unit descriptive contents according to the mapping relation between the unit descriptive content sets and the color coding value sets, wherein the first color coding value sequence comprises M color coding values;
a write subunit, in particular for: and writing a first color coding value sequence on the target processing picture to obtain a synthesized picture which represents the first descriptive information text by the first color coding value sequence.
In a specific embodiment, the apparatus further comprises:
the instruction acquisition module is used for acquiring an operation instruction for opening the synthesized picture;
The color coding analysis module is used for analyzing a first color coding value sequence in the synthesized picture according to the mapping relation between the unit descriptive content set and the color coding value set to obtain a first descriptive information text comprising M unit descriptive contents;
and the text output module is used for outputting the first descriptive information text.
In a specific embodiment, the mapping relationship is pre-established in the target input method program;
the descriptive information text acquisition unit is specifically configured to:
starting a target input method program;
acquiring a first descriptive information text input by a user through a target input method program;
determining a first color coding value sequence corresponding to M unit descriptive contents according to a mapping relation between the unit descriptive content sets and the color coding value sets, wherein the first color coding value sequence comprises:
and calling a mapping relation pre-established in the target input method program through the target input method program, and determining a first color coding value sequence corresponding to the M unit descriptive contents.
In a specific embodiment, the color code parsing module is specifically configured to:
and calling a mapping relation pre-established in the target input method program through the target input method program, and analyzing a first color coding value sequence in the synthesized picture to obtain a first descriptive information text.
In a specific embodiment, the writing subunit comprises:
a color coding replacement unit, configured to sequentially replace the original color coding values of M outer boundary pixel points of the target processing picture with the M color coding values of the first color coding value sequence.
In a specific embodiment, the apparatus further comprises:
a front separator setting module, configured to replace the original color coding values of N consecutive outer boundary pixel points before the M outer boundary pixel points with a first separator coding value, where N is an integer greater than 1;
and a post-separator setting module, configured to replace the original color coding values of K consecutive outer boundary pixel points after the M outer boundary pixel points with a second separator coding value, where K is an integer greater than 1.
In a specific embodiment, the color coding replacement unit is specifically configured to:
setting a starting-point pixel position in advance in the outer boundary area of the target processing picture;
adding a start position identifier at the starting-point pixel position, the start position identifier representing information about the pixel points after the starting-point pixel position whose color coding values are to be replaced;
and sequentially replacing the original color coding values of the M outer boundary pixel points after the starting-point pixel position with the M color coding values of the first color coding value sequence.
In a specific embodiment, the color code parsing module includes:
the separator position determining unit is used for analyzing the synthesized picture and determining the position of the first separator coding value and the position of the second separator coding value from the synthesized picture;
a first sequence determining unit, configured to determine color coding values of sequential pixel points between a position where the first separator coding value is located and a position where the second separator coding value is located as a first color coding value sequence;
the first analysis text unit is used for analyzing the first color coding value sequence into a first descriptive information text according to the mapping relation.
In a specific embodiment, the color code parsing module includes:
the identification analysis unit is used for analyzing a starting point position identification from a starting point pixel position of the synthesized picture;
a second sequence determining unit, configured to determine, according to the start position identifier, color coding values of M outer boundary pixel points after the start pixel position as a first color coding value sequence;
and the second analysis text unit is used for analyzing the first color coding value sequence into a first descriptive information text according to the mapping relation.
In a specific embodiment, the color code parsing module includes:
a jumping pixel point searching unit, configured to search the outer boundary area of the synthesized picture for a first jumping pixel point and a second jumping pixel point;
a third sequence determining unit, configured to determine the color coding values of the consecutive pixel points between the position of the first jumping pixel point and the position of the second jumping pixel point as the first color coding value sequence;
a third text parsing unit, configured to parse the first color coding value sequence into the first descriptive information text according to the mapping relation.
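One plausible reading of a "jumping pixel point" is a border pixel whose color differs sharply from its neighbor, since written color coding values rarely blend smoothly into the photograph's own border. The sketch below finds the first and last such jumps; the channel-difference threshold is an assumed heuristic, not a value from the patent:

```python
def find_jumping_pixels(picture, threshold=60):
    """Return the border indices of the first and second jumping pixel
    points, i.e. where the color changes abruptly along the boundary."""
    pixels = picture.load()
    border = [pixels[pos] for pos in outer_boundary_positions(*picture.size)]
    jumps = [i for i in range(1, len(border))
             if sum(abs(a - b) for a, b in zip(border[i], border[i - 1]))
             > threshold]
    return (jumps[0], jumps[-1]) if jumps else None
```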
In a specific embodiment, the apparatus further comprises:
a pixel point number determining module, configured to determine the number of replaceable pixel points in the target processing picture;
a judging module, configured to judge whether the number of color coding values in the first color coding value sequence is greater than the number of replaceable pixel points;
a prompt information output module, configured to output, if the judgment result of the judging module is yes, prompt information indicating that the number of color coding values in the first color coding value sequence exceeds the number of replaceable pixel points in the target processing picture.
In a specific embodiment, the pixel point number determining module is specifically configured to:
obtain the total number of pixel points of the target processing picture, and determine the product of the total number of pixel points and a preset proportion value as the number of replaceable pixel points of the target processing picture; or
determine the total number of outer boundary pixel points of the target processing picture as the number of replaceable pixel points of the target processing picture.
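Both counting strategies are simple arithmetic. A brief sketch, reusing the earlier helpers; the 0.01 proportion value in the example below is chosen only for illustration:

```python
def replaceable_pixel_count(width, height, ratio=None):
    """Number of replaceable pixel points: total pixels times a preset
    proportion value if one is given, else the outer boundary count."""
    if ratio is not None:
        return int(width * height * ratio)
    return 2 * (width + height) - 4   # pixels on the outer boundary

def check_capacity(picture, sequence):
    """Output prompt information when the sequence exceeds capacity."""
    capacity = replaceable_pixel_count(*picture.size)
    if len(sequence) > capacity:
        print(f"Prompt: sequence length {len(sequence)} exceeds the "
              f"{capacity} replaceable pixel points of the picture.")
        return False
    return True
```

For a 1920x1080 picture, the boundary strategy yields 2 x (1920 + 1080) - 4 = 5996 replaceable pixel points, while the proportion strategy with a 0.01 preset value yields 20736.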
In a specific embodiment, the apparatus further comprises:
a monitoring unit, configured to monitor whether a picture browsing event and/or a photographing behavior event currently exists;
a browsing picture determining unit, configured to determine, if a picture browsing event currently exists, the picture currently being browsed in the picture browsing event as the target processing picture;
a photographed picture determining unit, configured to determine, if a photographing behavior event is detected to currently exist, the picture currently photographed in the photographing behavior event as the target processing picture.
In a specific embodiment, the apparatus further comprises:
an operation obtaining module, configured to obtain a sharing operation by which the currently logged-in user shares the synthesized picture with a target user object;
a friend judging module, configured to judge whether the target user object belongs to a preset friend list of the currently logged-in user;
a picture sending unit, configured to: if the judgment result of the friend judging module is yes, generate a transcoding mapping relation from the mapping relation, and send the synthesized picture together with the transcoding mapping relation to the peer device where the target user object is located, so that the peer device parses the first color coding value sequence in the synthesized picture into text content different from the first descriptive information text based on the transcoding mapping relation; otherwise, send the synthesized picture to the peer device where the target user object is located, so that the peer device parses the first color coding value sequence in the synthesized picture into the first descriptive information text based on the mapping relation.
In a specific embodiment, the mapping relation between the unit description content set and the color coding value set is specifically one of the following (a brief sketch follows this list):
each character in a character set and each color coding value in the color coding value set satisfy a one-to-one mapping relation; or
each word in a word set and each color coding value in the color coding value set satisfy a one-to-one mapping relation; or
each phrase in a phrase set and each color coding value in the color coding value set satisfy a one-to-one mapping relation; or
each sentence in a sentence set and each color coding value in the color coding value set satisfy a one-to-one mapping relation.
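The character-granularity case is what the earlier CHAR_TO_COLOR sketch assumed. A coarser granularity trades vocabulary size for capacity: with a word-level mapping, a single pixel carries an entire word. The word set and colors below are illustrative assumptions:

```python
# Assumed word-granularity mapping relation: one color coding value per
# word; one-to-one because every color tuple below is distinct.
WORD_SET = ["beijing", "shanghai", "2018", "birthday", "holiday"]
WORD_TO_COLOR = {w: (i + 1, 2 * (i + 1), 3 * (i + 1))
                 for i, w in enumerate(WORD_SET)}

assert len(set(WORD_TO_COLOR.values())) == len(WORD_SET)  # one-to-one
```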
In a specific embodiment, the apparatus further comprises:
a text obtaining module, configured to obtain a second descriptive information text comprising P unit description contents for the synthesized picture;
a code determining module, configured to determine, according to the mapping relation, a second color coding value sequence corresponding to the P unit description contents comprised in the second descriptive information text, where P is an integer greater than 1;
a code writing module, configured to replace the first color coding value sequence on the synthesized picture with the second color coding value sequence, or to write the second color coding value sequence at a position on the target processing picture that does not belong to the first color coding value sequence, so as to generate a new synthesized picture.
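A sketch of the replacement branch of the code writing module, reusing the earlier helpers. Keeping a copy of the original picture so the border can be restored first is an illustrative simplification; the append-elsewhere branch would instead write the second sequence at positions the first sequence did not touch:

```python
def update_description(picture, original, new_text):
    """Erase the first color coding value sequence by restoring the
    original border pixels, then write the second sequence."""
    pixels = picture.load()
    orig_pixels = original.load()
    for pos in outer_boundary_positions(*picture.size):
        pixels[pos] = orig_pixels[pos]   # remove the first sequence
    return write_description(picture, new_text)
```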
The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method, and is not repeated here.
Fig. 6 is a block diagram illustrating an apparatus 600 for a picture processing method according to an exemplary embodiment. For example, device 600 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 6, device 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 generally controls overall operation of the device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 602 may include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 may include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operations at the device 600. Examples of such data include instructions for any application or method operating on device 600, contact data, phonebook data, messages, pictures, videos, and the like. The memory 604 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power component 606 provides power to the various components of the device 600. Power component 606 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 600.
The multimedia component 608 includes a screen that provides an output interface between the device 600 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may sense not only the boundary of a touch or swipe action, but also the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the device 600 is in an operational mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a Microphone (MIC) configured to receive external audio signals when the device 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 614 includes one or more sensors for providing status assessments of various aspects of the device 600. For example, the sensor assembly 614 may detect the on/off state of the device 600 and the relative positioning of components, such as the display and keypad of the device 600. The sensor assembly 614 may also detect a change in position of the device 600 or of a component of the device 600, the presence or absence of user contact with the device 600, the orientation or acceleration/deceleration of the device 600, and a change in temperature of the device 600. The sensor assembly 614 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate wired or wireless communication between the device 600 and other devices. The device 600 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 616 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 604, including instructions executable by processor 620 of device 600 to perform the above-described method. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
There is also provided a non-transitory computer readable storage medium storing instructions which, when executed by a processor of a mobile terminal, cause the device 600 to perform a picture processing method according to any one of the implementations of the picture processing method embodiments described above.
Based on the same inventive concept, an embodiment of the present invention provides an input method system, which includes any implementation of the foregoing picture processing apparatus embodiments; for specific implementation details, refer to the descriptions in the foregoing picture processing apparatus embodiments.
The one or more technical solutions provided by the embodiments of the present invention achieve at least the following technical effects or advantages:
Since a first descriptive information text comprising M unit description contents is obtained for a target processing picture, a first color coding value sequence corresponding to the M unit description contents is determined according to the mapping relation between the unit description content set and the color coding value set, and the first color coding value sequence is written on the target processing picture to obtain a synthesized picture in which the first descriptive information text is represented by the first color coding value sequence. The picture thus carries its description information through color coding values rather than visible words written on the picture, and because those color coding values occupy only a very small number of pixels, the change to the picture's display effect is visually negligible and the display quality of the picture is not destroyed.
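To make "a very small number of pixels" concrete, a quick back-of-the-envelope check with assumed example dimensions:

```python
# Assumed example: a 1920x1080 picture carrying a 50-character description
# at one border pixel per character, plus 2 + 2 separator pixels.
total_pixels = 1920 * 1080             # 2,073,600 pixels
used_pixels = 2 + 50 + 2               # 54 altered border pixels
print(f"{used_pixels / total_pixels:.6%}")   # ~0.002604% of all pixels
```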
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. The specification and examples are to be considered exemplary only, with the true scope and spirit of the invention being indicated by the following claims.
It is to be understood that the invention is not limited to the precise arrangements and instrumentalities described above and shown in the drawings, and that various modifications and changes may be made without departing from its scope. The scope of the invention is limited only by the appended claims; any modifications, equivalent substitutions, improvements, and the like that fall within the spirit and principles of the invention are intended to be included within its scope.

Claims (10)

1. A picture processing method, comprising:
acquiring a first descriptive information text aiming at a target processing picture;
synthesizing the first descriptive information text and the target processing picture to obtain a synthesized picture carrying the first descriptive information text, wherein the first descriptive information text carried by the synthesized picture does not obstruct a main image area of the target processing picture, and the synthesizing comprises: writing a first color coding value sequence representing the first descriptive information text on the target processing picture to obtain a synthesized picture carrying the first color coding value sequence, wherein the first descriptive information text is not displayed on the target processing picture;
wherein writing the first color coding value sequence on the target processing picture comprises: sequentially replacing primary color coding values of M outer boundary pixel points of the target processing picture with the M color coding values of the first color coding value sequence; replacing primary color coding values of N consecutive outer boundary pixel points preceding the M outer boundary pixel points with a first separator coding value, wherein N is an integer greater than or equal to 1; and replacing primary color coding values of K consecutive outer boundary pixel points following the M outer boundary pixel points with a second separator coding value, wherein K is an integer greater than or equal to 1.
2. The picture processing method according to claim 1, wherein the first descriptive information text does not obstruct the main image area of the target processing picture, specifically:
the first descriptive information text is located outside the main image area of the target processing picture; or
the first descriptive information text is concealed within the main image area of the target processing picture.
3. The picture processing method according to claim 1 or 2, wherein synthesizing the first descriptive information text and the target processing picture to obtain a synthesized picture carrying the first descriptive information text, the carried first descriptive information text not obstructing the main image area of the target processing picture, comprises:
determining a main image area of the target processing picture;
and superposing the first descriptive information text to an area outside the main image area in the target processing picture.
4. The picture processing method according to claim 3, wherein the determining a main image area of the target processing picture comprises:
determining user characteristic information of a first user inputting the first descriptive information text;
And determining the main image area on the target processing picture according to the user characteristic information of the first user.
5. The picture processing method according to claim 4, further comprising, after superimposing the first descriptive information text on an area outside the main image area in the target processing picture:
determining user characteristic information of a second user currently viewing the composite picture;
determining a new main image area on the target processing picture according to the user characteristic information of the second user;
and presenting the target processing picture, and displaying the first descriptive information text in an area outside the new main image area.
6. A method for processing a picture based on an input method, comprising:
starting a target input method program, wherein a mapping relation between a unit description content set and a color coding value set is pre-established in the target input method program;
performing the method of any of claims 1-5 based on the target input method program.
7. A picture processing apparatus, characterized by comprising:
a descriptive information text obtaining unit, configured to obtain a first descriptive information text for a target processing picture;
a synthesis processing unit, configured to synthesize the first descriptive information text and the target processing picture to obtain a synthesized picture carrying the first descriptive information text, wherein the first descriptive information text carried by the synthesized picture does not obstruct a main image area of the target processing picture, the synthesizing comprising: writing a first color coding value sequence representing the first descriptive information text on the target processing picture to obtain a synthesized picture carrying the first color coding value sequence, wherein the first descriptive information text is not displayed on the target processing picture;
wherein writing the first color coding value sequence on the target processing picture comprises: sequentially replacing primary color coding values of M outer boundary pixel points of the target processing picture with the M color coding values of the first color coding value sequence; replacing primary color coding values of N consecutive outer boundary pixel points preceding the M outer boundary pixel points with a first separator coding value, wherein N is an integer greater than or equal to 1; and replacing primary color coding values of K consecutive outer boundary pixel points following the M outer boundary pixel points with a second separator coding value, wherein K is an integer greater than or equal to 1.
8. An input method system comprising the picture processing apparatus of claim 7.
9. A computer readable storage medium having stored thereon a computer program, which when executed by a processor implements the method of any of claims 1-5.
10. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of claims 1-5 when executing the program.
CN201810595824.6A 2018-06-11 2018-06-11 Picture processing method and device Active CN110580730B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810595824.6A CN110580730B (en) 2018-06-11 2018-06-11 Picture processing method and device

Publications (2)

Publication Number Publication Date
CN110580730A CN110580730A (en) 2019-12-17
CN110580730B true CN110580730B (en) 2024-03-26

Family

ID=68809244

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810595824.6A Active CN110580730B (en) 2018-06-11 2018-06-11 Picture processing method and device

Country Status (1)

Country Link
CN (1) CN110580730B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111158817A (en) * 2019-12-24 2020-05-15 维沃移动通信有限公司 Information processing method and electronic equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090006979A1 (en) * 2007-06-27 2009-01-01 International Business Machines Corporation Text exchange facility for joining multiple text exchange communications into a single topic based communication
US8630200B2 (en) * 2010-06-01 2014-01-14 Meltwater News International Holdings, GmbH Method and apparatus for embedding information in a short URL

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101359360A (en) * 2008-07-31 2009-02-04 刘旭 Graphics context fused electronic ticket coding/decoding method
WO2014140770A1 (en) * 2013-03-15 2014-09-18 Send Only Oked Documents (Sood) Method for watermarking the text portion of a document
CN106127837A (en) * 2015-05-07 2016-11-16 顶漫画股份有限公司 The multi-language support system of network caricature
WO2018019124A1 (en) * 2016-07-29 2018-02-01 努比亚技术有限公司 Image processing method and electronic device and storage medium
CN107766349A (en) * 2016-08-16 2018-03-06 阿里巴巴集团控股有限公司 A kind of method, apparatus, equipment and client for generating text
CN106844659A (en) * 2017-01-23 2017-06-13 宇龙计算机通信科技(深圳)有限公司 A kind of multimedia data processing method and device
CN107358227A (en) * 2017-06-29 2017-11-17 努比亚技术有限公司 A kind of mark recognition method, mobile terminal and computer-readable recording medium
CN107622496A (en) * 2017-09-11 2018-01-23 广东欧珀移动通信有限公司 Image processing method and device
CN107907803A (en) * 2017-11-23 2018-04-13 南京杰迈视讯科技有限公司 A kind of portable augmented reality ultraviolet imagery system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on an image mapping coding algorithm applied to the frame buffer of an LCD controller; Wang Xin; Lei Ming; Zou Xuecheng; Computer & Digital Engineering; 2011-04-20 (Issue 04); full text *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant