CN111142821B - Processing method and device, electronic equipment and output equipment - Google Patents


Info

Publication number
CN111142821B
Authority
CN
China
Prior art keywords: image, output, processing condition, adjusting, condition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911366622.5A
Other languages
Chinese (zh)
Other versions
CN111142821A (en)
Inventor
董芳菲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN201911366622.5A
Publication of CN111142821A
Application granted
Publication of CN111142821B
Legal status: Active

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 — Digital output to display device; cooperation and interconnection of the display device with other functional units
    • G06F 3/1454 — Involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G06F 3/147 — Using display panels

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a processing method, a processing apparatus, an electronic device, and an output device. The method comprises: while a first image is being output, detecting whether a second image exists that, together with the first image, satisfies a processing condition; and, when such a second image is detected, adjusting the first image and/or the second image so that the adjusted first image is visually associated with the second image. Because the images are adjusted whenever the processing condition is detected to be satisfied, the two displayed images become visually associated, which enriches the visual experience of viewing them.

Description

Processing method and device, electronic equipment and output equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a processing method and apparatus, an electronic device, and an output device.
Background
At present, when several electronic images are displayed side by side, each is usually shown as an independent static image. The resulting display effect is monotonous and degrades the user's viewing experience.
Disclosure of Invention
In view of the above, the present application provides a processing method, a processing apparatus, an electronic device, and an output device, including:
A processing method, comprising:
detecting, while a first image is being output, whether a second image exists that satisfies a processing condition together with the first image;
when such a second image is detected, adjusting the first image and/or the second image so that the adjusted first image is visually associated with the second image.
Preferably, the second image and the first image satisfying the processing condition comprises:
a device distance between a second output device displaying the second image and a first output device displaying the first image satisfying the processing condition;
or
a region distance between a second output region of the second image and a first output region of the first image on a target output device satisfying the processing condition.
Preferably, the second image and the first image satisfying the processing condition comprises:
the first image containing a first object and the second image containing a second object, where a correspondence between the first object and the second object satisfies the processing condition.
Preferably, adjusting the first image and/or the second image comprises:
adjusting at least one image display parameter of the first image and/or the second image so that the image display parameters of the two images match.
Preferably, adjusting the first image and/or the second image comprises:
obtaining a first object in the first image and a second object in the second image, where the object similarity between the first object and the second object is higher than a similarity threshold;
and adjusting the object display parameters of the first object and/or the second object so that they match.
Preferably, adjusting the first image and/or the second image comprises:
obtaining a first target object and/or a second target object, where the first target object and the second target object correspond to an association relationship between the first image and the second image;
and outputting the first target object in the first image in a first output manner and/or outputting the second target object in the second image in a second output manner;
where the first output manner of the first target object matches the scene parameters of the first image, and the second output manner of the second target object matches the scene parameters of the second image.
Preferably, adjusting the first image and/or the second image comprises:
replacing the first image with a third image for output, and/or replacing the second image with a fourth image for output;
where the third image is associated with the first image with respect to a first object, and the object pose of the first object in the third image differs from its pose in the first image; the fourth image is associated with the second image with respect to a second object, and the object pose of the second object in the fourth image differs from its pose in the second image;
and where the change in the object pose of the first object in the third image relative to the first image corresponds to the object relationship between the first object and the second object, and the change in the object pose of the second object in the fourth image relative to the second image likewise corresponds to the object relationship between the first object and the second object.
A processing apparatus, comprising:
a detection unit configured to detect, while a first image is being output, whether a second image exists that satisfies a processing condition together with the first image;
a control unit configured to, when such a second image is detected, adjust the first image and/or the second image so that the adjusted first image and the second image are visually associated.
An electronic device, comprising:
a memory for storing an application program and the data generated by its execution;
a processor for detecting, while a first image is being output, whether an output second image exists that satisfies a processing condition together with the first image, and, when such a second image is detected, adjusting the first image and/or the second image so that the adjusted first image is visually associated with the second image.
An output device, comprising:
a display for outputting a first image;
a controller for detecting whether a second image output by another device satisfies a processing condition together with the first image, and, when it does, adjusting the first image and/or triggering the other device to adjust the second image, so that the adjusted first image and second image are visually associated.
According to the above technical solutions, the processing method, apparatus, electronic device, and output device provided by this application adjust the first image and/or the second image when the two images are detected to satisfy the processing condition, so that the two adjusted images are visually associated, thereby enriching the visual experience of viewing the images.
Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the following briefly introduces the accompanying drawings used in describing the embodiments. The drawings described below show only some embodiments of the present application; a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a processing method according to the first embodiment of the present application;
Figs. 2-3 are example diagrams of outputting two images in an embodiment of the present application;
Figs. 4-7 are schematic diagrams of two images satisfying the processing condition in an embodiment of the present application;
Figs. 8-24 are example diagrams of adjusting two images to achieve visual association in embodiments of the present application;
Fig. 25 is a schematic structural diagram of a processing apparatus according to the second embodiment of the present application;
Fig. 26 is a schematic structural diagram of an electronic device according to the third embodiment of the present application;
Fig. 27 is a schematic structural diagram of an output device according to the fourth embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only a part of the embodiments of this application, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of this application.
Referring to Fig. 1, a flowchart of a processing method provided in the first embodiment of the present application is shown. The method applies to an electronic device capable of image processing and image output, such as a mobile phone, a tablet, or an electronic screen. The method mainly adjusts two images that satisfy a processing condition so that the adjusted images are visually associated, enriching the user's experience of viewing them.
Specifically, the method in this embodiment may include the following steps:
step 101: in a case where the first image is output, it is detected whether or not the second image and the first image satisfy the processing condition.
In this embodiment, the condition that the first image is output is a condition that the first image is output on the first output device, and at this time, it is detected whether a second image is output on the second output device and whether a processing condition is satisfied between the second image output on the second output device and the first image output on the first output device in this embodiment, as shown in fig. 2;
alternatively, the case of outputting the first image in this embodiment means: in the case where the first image is output in the first output region on the specific target output device, in this embodiment, it is detected whether or not there is a second image output in the second output region on the target output device, and whether or not a condition is satisfied between the second image output in the second output region and the first image output in the first output region, as shown in fig. 3.
It should be noted that the first output device, the second output device, and the target output device in this embodiment may all be devices capable of outputting images, such as a mobile phone, a pad, or an electronic screen.
Step 102: when a second image is detected that satisfies the processing condition together with the first image, adjust the first image and/or the second image so that the adjusted first image is visually associated with the second image.
In this embodiment, the second image and the first image satisfying the processing condition may be understood as follows: when the processing condition is satisfied, the first image is adjusted, the second image is adjusted, or both are adjusted, so that the two adjusted images present effects that a viewer perceives as visually related, for example containing the same object, changing dynamically in a similar way, or depicting correlated scenes.
As can be seen from the above, in the processing method provided by the first embodiment of the present application, when a second image is detected that satisfies the processing condition together with the first image, the first image and/or the second image is adjusted so that the two adjusted images are visually associated, thereby enriching the visual experience of viewing the images.
In one implementation, the second image and the first image satisfying the processing condition in step 101 may specifically mean that the distance between the two images satisfies the processing condition. This can be divided into the following cases:
In one case, the processing condition concerns the device distance between the second output device displaying the second image and the first output device displaying the first image, for example the device distance being smaller than a first distance threshold. As shown in Fig. 4, the distance between electronic picture frame A, which outputs the first image, and electronic picture frame B, which outputs the second image, may be variable. When the user moves frame A and/or frame B so that the distance between them falls below 10 centimeters (determined, for example, by the frames' positioning apparatus, or by image acquisition and recognition of the frames), this embodiment determines that the processing condition between the second image and the first image is satisfied. Alternatively, as shown in Fig. 5, the distance between frame A and frame B may be fixed at less than 10 centimeters; when frame B begins outputting the second image while the first image is already output on frame A, this embodiment detects that the device distance satisfies the processing condition and determines that the second image and the first image satisfy the processing condition.
In another case, the processing condition concerns the region distance between the second output region of the second image and the first output region of the first image on the same target output device, for example the region distance being smaller than a second distance threshold. As shown in Fig. 6, a single electronic picture frame A has two output regions, a first output region a1 and a second output region a2, whose positions within frame A are variable; the first image is output in a1 and the second image in a2. When the user moves the two regions so that the distance between them falls below 3 centimeters, this embodiment determines that the processing condition between the second image and the first image is satisfied. Alternatively, as shown in Fig. 7, the distance between a1 and a2 may be fixed at less than 3 centimeters; when a2 begins outputting the second image while the first image is already output in a1, this embodiment detects that the region distance satisfies the processing condition and determines that the second image and the first image satisfy the processing condition.
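The distance-based checks above can be sketched as follows. This is an illustrative sketch only: the thresholds (10 centimeters between devices, 3 centimeters between output regions) are taken from the embodiment's examples, and the function name and signature are hypothetical, since the patent does not specify an implementation.

```python
# Hypothetical thresholds drawn from the embodiment's examples.
DEVICE_DISTANCE_THRESHOLD_CM = 10.0  # two separate output devices (Figs. 4-5)
REGION_DISTANCE_THRESHOLD_CM = 3.0   # two regions on one target device (Figs. 6-7)

def meets_processing_condition(distance_cm: float, same_device: bool) -> bool:
    """Return True when two images are close enough to trigger adjustment.

    `same_device` selects the region-distance threshold (two output regions
    on one target device) versus the device-distance threshold (two devices).
    """
    threshold = (REGION_DISTANCE_THRESHOLD_CM if same_device
                 else DEVICE_DISTANCE_THRESHOLD_CM)
    return distance_cm < threshold
```

Either variable distances (the user moves a frame or a region) or fixed sub-threshold distances (a second image starts being output nearby) would reach this check with the same arguments.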
In another implementation, the second image and the first image satisfying the processing condition in step 101 may be understood as the two images satisfying the processing condition in the dimension of image content. For example:
the first image contains a first object and the second image contains a second object, and the correspondence between the first object and the second object satisfies the processing condition.
Here, the correspondence between the first object and the second object may be: a relationship in which the two objects have identical or nearly identical attributes or parameters, for example both being a blue sky or a building; a scene-based logical relationship between the objects, such as man and woman, predator and prey in a food chain, or elder and child; or a logical relationship under a business process in a given scene, such as eating after cooking, or an airplane taking off after boarding.
It should be noted that, in this embodiment, at least one first object in the first image and at least one second object in the second image may be obtained by performing image recognition on the two images. Accordingly, the first image and the second image satisfying the processing condition means that a correspondence between one or more first objects and one or more second objects satisfies the condition; that is, a first object in the first image is at least approximately identical to, or stands in a certain scene logical relationship with, a second object in the second image.
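The content-based condition above could be sketched as a label-matching check. Everything here is hypothetical: the patent leaves both the recognition algorithm and the scene-logic relationships unspecified, so the lookup table below simply encodes the examples given in the text (man and woman, food chain, elder and child).

```python
# Hypothetical scene-logic pairs, taken from the examples in the description.
SCENE_LOGIC_PAIRS = {("man", "woman"), ("tiger", "rabbit"), ("elder", "child")}

def objects_correspond(first_objects, second_objects):
    """Return a (first, second) label pair satisfying the processing condition:
    either the same label appears in both images, or the pair matches a known
    scene-logic relationship (in either order). Returns None if no pair does.
    """
    for a in first_objects:
        for b in second_objects:
            if a == b or (a, b) in SCENE_LOGIC_PAIRS or (b, a) in SCENE_LOGIC_PAIRS:
                return (a, b)
    return None
```

A real system would feed this from an object recognizer and use similarity scores rather than exact label equality.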
In one implementation, the adjustment of the first image and/or the second image in step 102 may be carried out as follows:
at least one image display parameter of the first image and/or the second image is adjusted so that the image display parameters of the two images match.
The image display parameters in this embodiment may include any one or any combination of pixel value, display direction (horizontal or vertical), hue, transparency, saturation, and so on. Accordingly, this embodiment adjusts at least one image display parameter of the first image, of the second image, or of both at the same time, so that the adjusted images match on the corresponding parameters, for example having the same transparency or similar hues.
For example, when this embodiment detects that the device distance between electronic picture frame A and electronic picture frame B is less than 10 centimeters, the image hue of the first image in frame A and the image hue of the second image in frame B are made the same, for instance both a cool hue. When the user views the images in the two frames, the hues within the user's visual range are no longer inconsistent, which improves the visual experience, as shown in Fig. 8.
Alternatively, when this embodiment detects that two image regions of an electronic picture frame output the first image and the second image and that both images contain a blue-sky scene object, the pixel values of the blue-sky regions in both images are adjusted to gray levels so as to highlight the other scene objects. The user can then view the non-sky objects more intuitively, which brings a richer viewing experience, as shown in Fig. 9.
Alternatively, when the first image and the second image are output on frames A and B whose device distance is less than 10 centimeters, the pixel-value gradient pattern of the second image in frame B may be adjusted according to that of the first image in frame A, so that the gradient patterns of the two nearby frames remain consistent. The two smaller images together then give the user the experience of viewing one very wide image, as shown in Fig. 10.
Alternatively, when the first image and the second image are output on frames A and B whose device distance is less than 10 centimeters, the second image in frame B may be rotated from its original vertical display to horizontal display according to the display direction of the first image in frame A, so that both nearby frames output images horizontally. Adjusting the display direction in this way provides a more comfortable viewing experience, as shown in Fig. 11.
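The parameter-matching adjustment described above might be sketched as copying selected display parameters from the first image to the second. The `DisplayParams` structure and the choice of which parameters to copy are assumptions for illustration; the patent only requires that at least one parameter be adjusted until the two images match.

```python
from dataclasses import dataclass, replace

@dataclass
class DisplayParams:
    # Illustrative subset of the display parameters named in the text.
    hue: str             # e.g. "cool" or "warm"
    transparency: float  # 0.0 opaque .. 1.0 fully transparent
    orientation: str     # "horizontal" or "vertical"

def match_display_params(first: DisplayParams, second: DisplayParams) -> DisplayParams:
    """Return a copy of `second` adjusted to match `first`, as when the
    embodiment rotates the second frame's image into the first frame's
    horizontal orientation (Fig. 11) or unifies the hue (Fig. 8)."""
    return replace(second, hue=first.hue, orientation=first.orientation)
```

Parameters not chosen for matching (here, transparency) are deliberately left untouched.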
In another implementation, the adjustment of the first image and/or the second image in step 102 may be carried out as follows:
First, a first object in the first image and a second object in the second image may be obtained through an algorithm such as image recognition, where the object similarity between them is higher than a similarity threshold; that is, this embodiment finds an object in each of the two images whose mutual similarity exceeds the threshold. For example, image recognition on the first image in frame A identifies the objects "small black 1" and "small white", and image recognition on the second image in frame B identifies the objects "small yellow" and "small black 2"; as shown in Fig. 12, the similarity between the first object "small black 1" and the second object "small black 2" is higher than 90%.
Then, the object display parameters of the first object and/or the second object are adjusted to match: the parameters of the first object, of the second object, or of both may be adjusted so that the two objects' display parameters become identical or approximately identical. As shown in Fig. 13, the outlines of "small black 1" and "small black 2" may be rendered with bold dashed lines, and the brightness of the object regions set to highlight or to flicker. After the adjustment, the two matched objects are presented to the user in a more intuitive way, so the user can directly compare them across the two images, enriching the visual experience.
On this basis, when the first and second objects are recognized and their object similarity exceeds the similarity threshold, this embodiment may adjust not only the objects themselves but also the remaining content: the objects or image regions other than the first object in the first image, those other than the second object in the second image, or both, so that the display parameters of the regions outside the matched objects also match. As shown in Fig. 14, after "small black 1" and "small black 2" are recognized, the pixel values of all other objects and regions in both images may be set to gray values. The display parameters of the two matched objects remain unchanged while the rest of each image is rendered in grayscale, so the matched objects stand out clearly and intuitively, enriching the user's viewing experience.
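The "keep the matched object, gray out the rest" adjustment of Fig. 14 can be sketched on a plain pixel grid. The bounding-box representation of the matched object and the simple channel average used for the gray level are assumptions; a real implementation would use the recognized object mask and proper luma weights.

```python
def gray_outside_region(pixels, box):
    """Convert every RGB pixel outside `box` (x0, y0, x1, y1, exclusive) to
    its gray level, leaving the matched object's region untouched.

    `pixels` is a list of rows, each row a list of (r, g, b) tuples.
    """
    x0, y0, x1, y1 = box
    out = []
    for y, row in enumerate(pixels):
        new_row = []
        for x, (r, g, b) in enumerate(row):
            if x0 <= x < x1 and y0 <= y < y1:
                new_row.append((r, g, b))  # inside the object box: keep color
            else:
                gray = (r + g + b) // 3    # simple average; real code might use luma weights
                new_row.append((gray, gray, gray))
        out.append(new_row)
    return out
```

Applying this to both images with their respective object boxes yields the matched display parameters described above.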
In a further implementation, the adjustment of the first image and/or the second image in step 102 may be carried out as follows:
First, a first target object and/or a second target object is obtained, where the target objects correspond to an association relationship between the first image and the second image; the two target objects may be the same or different. The association relationship between the images can be understood as a correspondence between the first object in the first image and the second object in the second image: an identical or nearly identical relationship, such as that between "small black 1" and "small black 2"; a scene logical relationship, such as man and woman or a food chain; or a transaction-process relationship, such as a wound needing to be bandaged after bleeding. Accordingly, the target objects are generated from this relationship. For example, a heart or a rose may be generated as the first target object from a man-and-woman relationship between the first and second objects; a water-drop first target object and a water-drop second target object from a food-chain relationship; or, from an elder-and-child relationship, a snack as the first target object and a refusal message as the second target object.
Then, the first target object is output in the first image in a first output manner, and/or the second target object is output in the second image in a second output manner.
When the first target object is generated, it is output in the first image in the first output manner. The first target object may match the image display parameters of the first image, such as its hue or transparency. In addition, the first output manner matches the scene parameters of the first image: before output, the scene parameters of the first image may be analyzed, such as the architectural style in the image or the feature state of an object in it, for example a person's emotional state or a home's decoration style, and the first target object is then output accordingly. For example, after a heart is generated as the first target object, it is output in pink in the first image to match the happy emotional state of the male character there, as shown in Fig. 15. Similarly, the second output manner of the second target object matches the scene parameters of the second image: after a heart is generated as the second target object, it is output gradually fading to gray to match the indifferent emotional state of the female character in the second image. As shown in Fig. 15, the male character expresses affection for the female character but is rejected, bringing the user a more distinctive image-viewing experience.
As another example, after the first target object of a snack and the second target object of a refusal word are generated, the snack is output so that it moves from the elderly person toward the second image, according to the standing posture of the elderly person in the first image and that person's direction relative to the second image; correspondingly, the refusal word is output in a flashing manner at the position corresponding to the child's head in the second image, according to the head position of the child in the second image, as shown in fig. 16: the elderly person offers the child a snack, and the child refuses it for the sake of dental health. A dynamic object output effect thus brings the user a more distinctive image viewing experience.
As another example, after the first target object and the second target object of water drops are generated, a water drop is output in a dynamically flashing manner beside the tiger's mouth in the first image, according to the position of the tiger's mouth in the first image, and a water drop is output in a dynamically flashing manner at the rabbit's head position in the second image, according to the position of the rabbit in the second image, as shown in fig. 17: the tiger's mouth waters at the sight of the rabbit, and the rabbit cries loudly at the sight of the tiger. A dynamic object output manner thus brings the user a more interesting image viewing experience.
As another example, after the first target object and the second target object of a rose are generated, the rose is output at the position of the male character's hand in the first image and then output moving toward the second image, according to the position of the male character's hand in the first image; in the second image, the rose is output moving from the edge nearest the first image toward the female character's hand and is then output at the position of the female character's hand, according to the position of the female character's hand in the second image, as shown in fig. 18: the male character gives the rose to the female character, and the female character receives it, thereby bringing the user a more interesting image viewing experience through a dynamic object output manner.
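The relationship-driven target-object selection and scene-matched styling described above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the relation table, emotion labels, and style fields are all invented for the example.

```python
# Hypothetical sketch: pick target objects from an association relation
# between two images, then style each one to match its image's scene
# parameters. All names and values here are illustrative assumptions.

RELATION_TO_TARGETS = {
    "man_woman": ("heart", "heart"),
    "food_chain": ("water_drop", "water_drop"),
    "elder_child": ("snack", "refusal_word"),
}

def style_for_scene(target, scene):
    """Choose an output manner for a target object from scene parameters."""
    if scene.get("emotion") == "happy":
        return {"object": target, "color": "pink", "effect": "static"}
    if scene.get("emotion") == "indifferent":
        return {"object": target, "color": "gray", "effect": "fade"}
    return {"object": target, "color": "neutral", "effect": "flash"}

def plan_outputs(relation, scene1, scene2):
    """Return the styled first and second target objects for a relation."""
    first, second = RELATION_TO_TARGETS[relation]
    return style_for_scene(first, scene1), style_for_scene(second, scene2)

# The fig. 15 scenario: a happy man, an indifferent woman.
out1, out2 = plan_outputs("man_woman",
                          {"emotion": "happy"},
                          {"emotion": "indifferent"})
```

Here `out1` is a pink static heart for the first image and `out2` a heart fading to gray for the second, mirroring the fig. 15 example.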
In one implementation, when the first image and/or the second image is adjusted in step 102, the adjustment may be implemented as follows:
in this embodiment, the first image may be replaced with a third image for output, and/or the second image may be replaced with a fourth image for output;
Here, the third image is associated with the first image with respect to the first object, and the object pose of the first object in the third image is different from the object pose of the first object in the first image. That is, the third image replacing the first image is associated with the first image with respect to the first object: for example, the third image contains the first object of the first image, but the first object differs between the two images, such as in its object pose. The change in the object pose of the first object in the third image relative to the first object in the first image corresponds to the object relationship between the first object and the second object. For example, the expression of the male character in the first image is crying, while the male character in the third image is the same object but has a happy expression, as shown in fig. 19; this emotional change from crying to happy corresponds to the man-woman logical relationship between the male character in the first image and the female character in the second image.
Similarly, the fourth image is associated with the second image with respect to the second object, and the object pose of the second object in the fourth image is different from the object pose of the second object in the second image. That is, the fourth image replacing the second image is associated with the second image with respect to the second object: for example, the fourth image contains the second object of the second image, but the second object differs between the two images, such as in its object pose. The change in the object pose of the second object in the fourth image relative to the second object in the second image corresponds to the object relationship between the first object and the second object. For example, the female character in the second image faces away from the first image, while the female character in the fourth image is the same object but faces toward the first image, as shown in fig. 20; this change in orientation from facing away to facing toward the first image corresponds to the man-woman logical relationship between the male character in the first image and the female character in the second image.
For example, the first image contains a sleeping tiger and the second image contains a rabbit happily eating, and the tiger and the rabbit conform to a food-chain logical relationship. In this case, the first image is replaced with a third image that still contains the tiger object, but in which the tiger has changed from lying asleep to standing and roaring toward the second image; at the same time, the second image is replaced with a fourth image that still contains the rabbit object, but in which the rabbit has changed from eating to running away from the first image, as shown in fig. 21. Thus, in this embodiment, by changing the first image and the second image, a linked output between the images is achieved, bringing the user a more novel image viewing experience.
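The image-replacement branch can be sketched as a lookup over a candidate library: find an image containing the same object but in the relationship-driven target pose. The library, file names, and pose labels below are invented for illustration; the disclosure does not specify how replacement images are stored or selected.

```python
# Illustrative sketch of the fig. 21 replacement step: swap each image
# for one containing the same object in a changed pose. The candidate
# library and pose labels are assumptions made for this example.

def pick_replacement(candidates, obj_id, current_pose, desired_pose):
    """Return a candidate image containing the same object in the
    desired (different) pose, or None if no such candidate exists."""
    for image in candidates:
        if (image["object"] == obj_id
                and image["pose"] == desired_pose
                and image["pose"] != current_pose):
            return image
    return None

library = [
    {"name": "tiger_sleeping.png", "object": "tiger", "pose": "asleep"},
    {"name": "tiger_roaring.png",  "object": "tiger", "pose": "roaring"},
    {"name": "rabbit_eating.png",  "object": "rabbit", "pose": "eating"},
    {"name": "rabbit_running.png", "object": "rabbit", "pose": "running"},
]

# Food-chain relation drives the desired poses for both images.
third = pick_replacement(library, "tiger", "asleep", "roaring")
fourth = pick_replacement(library, "rabbit", "eating", "running")
```

The pose change in each replacement (asleep to roaring, eating to running) is what encodes the object relationship between the two images.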
In summary, in the technical solution of the present application, when two electronic photo frames or image output areas are brought close together, the two output images are adjusted according to their content to produce a linked dynamic effect. For example, parameter settings may be adjusted automatically: image parameters such as brightness, color cast, color saturation, and contrast of two landscape pictures with similar tones are adjusted to similar values, as shown in fig. 22, where the brightness of the two adjacent images is made consistent to achieve a relatively uniform and comfortable visual effect. Alternatively, a linked dynamic effect is generated according to the image content: for example, a man holds a rose in one image and a woman appears in the other; when the two images are put together, the man can give the rose to the woman in the other image, and a smile appears on the woman's face, as in the dynamic display effect shown in fig. 23. Alternatively, icons may be added to the two images depending on their content: for example, one image shows a man and the other a woman, and when the two images are put together, heart icons can fall within both images, as shown in fig. 24.
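The automatic parameter adjustment can be sketched as pulling each shared display parameter toward a common value, here the simple average. The parameter names and averaging rule are assumptions for illustration; a real device would apply the converged values to actual pixel data.

```python
# Minimal sketch (assumed parameter names) of adjusting the display
# parameters of two adjacent images to similar values so the pair
# looks uniform, as in the fig. 22 brightness example.

def match_parameters(params1, params2,
                     keys=("brightness", "saturation", "contrast")):
    """Average each shared display parameter so both images converge."""
    adjusted1, adjusted2 = dict(params1), dict(params2)
    for key in keys:
        if key in params1 and key in params2:
            target = (params1[key] + params2[key]) / 2.0
            adjusted1[key] = adjusted2[key] = target
    return adjusted1, adjusted2

a, b = match_parameters({"brightness": 0.75, "saturation": 0.5},
                        {"brightness": 0.25, "saturation": 1.0})
# both images now share brightness 0.5 and saturation 0.75
```

Averaging is just one policy; the adjustment could equally pull one image toward the other, or toward a preset comfortable value.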
Therefore, in the technical solution of the present application, images with a linked dynamic effect become more interesting and interactive, providing a richer image viewing experience.
Referring to fig. 25, a schematic structural diagram of a processing apparatus according to a second embodiment of the present application is shown. The processing apparatus may be configured in an electronic device capable of image processing and image output, such as a mobile phone, a pad, or an electronic screen. The apparatus in the embodiment of the present application is mainly used to adjust two images that satisfy the processing condition, so that the two adjusted images have a visual association, enriching the user's visual experience when viewing the images.
Specifically, the apparatus in this embodiment may include the following units:
a detection unit 2501, configured to detect, in a case where a first image is output, whether there is a second image that satisfies a processing condition with the first image;
a control unit 2502, configured to, in a case where it is detected that a second image satisfies the processing condition with the first image, adjust the first image and/or the second image such that the adjusted first image and the second image have a visual association.
As can be seen from the above, in the processing apparatus provided by the second embodiment of the present application, when it is detected that a second image satisfies the processing condition with the first image, the first image and/or the second image is adjusted so that the two adjusted images have a visual association, thereby enriching the visual experience of viewing the images.
In one implementation, the second image and the first image satisfy a processing condition, including:
the device distance between the second output device where the second image is located and the first output device where the first image is located meets the processing condition;
or the region distance between the second output region of the second image on the target output device and the first output region of the first image on the target output device meets the processing condition.
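The two proximity-style processing conditions above — device distance and region distance — reduce to the same check: whether two positions fall within a close-range threshold. The coordinate model and threshold below are illustrative assumptions, since the disclosure does not specify units or a measurement method.

```python
# Sketch of the close-range processing condition for two output devices
# (or two output regions on one device); threshold and coordinates are
# invented for illustration.

def distance(p, q):
    """Euclidean distance between two 2-D positions."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def satisfies_processing_condition(pos1, pos2, threshold=10.0):
    """True when the two outputs are within the close-range threshold."""
    return distance(pos1, pos2) <= threshold

# Two electronic photo frames placed side by side satisfy the condition;
# frames far apart do not.
near = satisfies_processing_condition((0, 0), (6, 8))    # distance 10
far = satisfies_processing_condition((0, 0), (30, 40))   # distance 50
```

The same predicate serves both branches: for devices the positions could come from a positioning apparatus, for regions from window coordinates on the target output device.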
In another implementation, the second image and the first image satisfy a processing condition, including: the first image has a first object therein, the second image has a second object therein, and a correspondence between the first object and the second object satisfies a processing condition.
In one implementation, the control unit 2502 adjusts the first image and/or the second image, including:
adjusting at least one image display parameter of the first image and/or the second image such that the image display parameters of the first image and the second image match.
In one implementation, the control unit 2502 adjusts the first image and/or the second image, including:
obtaining a first object in the first image and a second object in the second image, wherein the object similarity between the first object and the second object is higher than a similarity threshold;
and adjusting the object display parameters of the first object and/or the second object to be matched.
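A minimal sketch of this similar-object branch follows: find the most similar pair of objects across the two images, and if the pair clears the similarity threshold, match one object's display parameters to the other's. The label-based similarity measure and the `hue` parameter are stand-ins invented for this example.

```python
# Illustrative sketch: pick the most similar (first, second) object
# pair above a threshold, then match their object display parameters.
# The similarity measure and parameter names are assumptions.

def best_pair(objects1, objects2, similarity, threshold=0.8):
    """Return the most similar object pair, or None if below threshold."""
    pairs = [(similarity(a, b), a, b) for a in objects1 for b in objects2]
    score, a, b = max(pairs, key=lambda t: t[0])
    return (a, b) if score > threshold else None

def label_similarity(a, b):
    """Toy similarity: 1.0 for matching labels, 0.0 otherwise."""
    return 1.0 if a["label"] == b["label"] else 0.0

objects_in_first = [{"label": "cat", "hue": 30}, {"label": "tree", "hue": 90}]
objects_in_second = [{"label": "cat", "hue": 200}]

pair = best_pair(objects_in_first, objects_in_second, label_similarity)
if pair:
    first, second = pair
    second["hue"] = first["hue"]  # match the object display parameters
```

In practice the similarity measure would be a visual one (e.g. feature matching), but the control flow — threshold, then parameter matching — is the same.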
In one implementation, the control unit 2502 adjusts the first image and/or the second image, including:
obtaining a first target object and/or a second target object, wherein the first target object and the second target object correspond to an incidence relation between the first image and the second image; outputting the first target object in the first image in a first output mode and/or outputting the second target object in the second image in a second output mode;
and the first output mode of the first target object is matched with the scene parameters of the first image, and the second output mode of the second target object is matched with the scene parameters of the second image.
In one implementation, the control unit 2502 adjusts the first image and/or the second image, including:
replacing the first image with a third image for outputting, and/or replacing the second image with a fourth image for outputting;
wherein the third image is associated with the first image with respect to a first object and the object pose of the first object in the third image is different from the object pose of the first object in the first image; the fourth image is associated with the second image with respect to a second object, and an object pose of the second object in the fourth image is different from an object pose of the second object in the second image;
and the change information of the object pose of the first object in the third image relative to the first object in the first image corresponds to the object relationship between the first object and the second object; the change information of the object pose of the second object in the fourth image relative to the second object in the second image corresponds to the object relationship between the first object and the second object.
It should be noted that, for the specific implementation of each unit in the present embodiment, reference may be made to the corresponding content in the foregoing, and details are not described here.
Fig. 26 is a schematic structural diagram of an electronic device according to a third embodiment of the present application; the electronic device may be a device capable of image processing and image output, such as a mobile phone or a pad. The electronic device in the embodiment of the present application is mainly used to adjust two images output in different output areas, or on different electronic screens or displays, so that the two adjusted images have a visual association, enriching the user's visual experience when viewing the images.
Specifically, the electronic device in this embodiment may include the following structure:
a memory 2601 for storing applications and data generated by the application operations;
a processor 2602, configured to detect, in a case where a first image is output, whether there is an output second image that satisfies a processing condition with the first image; and, in a case where it is detected that a second image satisfies the processing condition with the first image, to adjust the first image and/or the second image such that the adjusted first image has a visual association with the second image.
As can be seen from the foregoing solution, in the electronic device provided in the third embodiment of the present application, when it is detected that the processing condition is satisfied between the second image and the first image, the first image and/or the second image are/is adjusted, so that the two adjusted images have a visual association, and thus, the visual experience of the viewing image can be enriched.
Referring to fig. 27, a schematic structural diagram of an output device according to a fourth embodiment of the present application is shown. The output device may be a device capable of image processing and image output, such as an electronic screen or a display; specifically, reference may be made to the electronic picture frame A or the electronic picture frame B in the foregoing examples. The output device in the embodiment of the present application is mainly used to adjust the first image output by the output device itself and the second image output by another device, so that the two adjusted images have a visual association, enriching the user's visual experience when viewing the images.
Specifically, the output device in this embodiment may include the following structure:
a display 2701 for outputting a first image;
the controller 2702 is configured to detect whether a second image output by another device satisfies a processing condition with the first image; when such a second image is detected, the controller adjusts the first image and/or triggers the other device to adjust the second image, for example by sending the other device an instruction to adjust the second image, so that the adjusted first image and second image have a visual association.
Specifically, the controller 2702 in this embodiment may detect whether a second image is output by another device by means of a positioning apparatus or by image collection and detection, and may further detect whether that second image and the first image satisfy the processing condition.
As can be seen from the above, in the output device provided by the fourth embodiment of the present application, when it is detected that the second image on another device satisfies the processing condition with the first image on the current output device, the first image and/or the second image is adjusted so that the two adjusted images have a visual association, thereby enriching the visual experience of viewing the images.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of processing, comprising:
in a case where a first image is output, detecting whether there is a second image that satisfies a processing condition with the first image, wherein the first image and the second image satisfying the processing condition means that a preset association relationship is satisfied between the output of the first image and the output of the second image, and the association relationship comprises any one of the following: the image output positions satisfy a close-range condition, the image contents satisfy a similarity condition, and the image contents conform to a preset scene logical relationship;
in case it is detected that there is a second image that satisfies the processing condition with the first image, adjusting the first image and/or the second image such that the adjusted first image has a visual association with the second image.
2. The method of claim 1, the second image satisfying a processing condition with the first image, comprising:
the device distance between the second output device where the second image is located and the first output device where the first image is located meets the processing condition;
or,
the region distance between the second output region of the second image on the target output device and the first output region of the first image on the target output device satisfies a processing condition.
3. The method of claim 1, the second image satisfying a processing condition with the first image, comprising:
the first image has a first object therein, the second image has a second object therein, and a correspondence between the first object and the second object satisfies a processing condition.
4. The method of claim 1, adjusting the first image and/or the second image, comprising:
adjusting at least one image display parameter of the first image and/or the second image such that the image display parameters of the first image and the second image match.
5. The method of claim 1, adjusting the first image and/or the second image, comprising:
obtaining a first object in the first image and a second object in the second image, wherein the object similarity between the first object and the second object is higher than a similarity threshold;
and adjusting the object display parameters of the first object and/or the second object to be matched.
6. The method of claim 1, adjusting the first image and/or the second image, comprising:
obtaining a first target object and/or a second target object, wherein the first target object and the second target object correspond to an incidence relation between the first image and the second image;
outputting the first target object in the first image in a first output mode and/or outputting the second target object in the second image in a second output mode;
and the first output mode of the first target object is matched with the scene parameters of the first image, and the second output mode of the second target object is matched with the scene parameters of the second image.
7. The method of claim 1, adjusting the first image and/or the second image, comprising:
replacing the first image with a third image for outputting, and/or replacing the second image with a fourth image for outputting;
wherein the third image is associated with the first image with respect to a first object and the object pose of the first object in the third image is different from the object pose of the first object in the first image; the fourth image is associated with the second image with respect to a second object, and an object pose of the second object in the fourth image is different from an object pose of the second object in the second image;
and the change information of the object posture of the first object in the third image relative to the first object in the first image corresponds to the object relation between the first object and the second object; the change information of the object pose of the second object in the fourth image relative to the second object in the second image corresponds to the object relationship between the first object and the second object.
8. A processing apparatus, comprising:
a detection unit, configured to detect, in a case where a first image is output, whether there is a second image that satisfies a processing condition with the first image, wherein the first image and the second image satisfying the processing condition means that a preset association relationship is satisfied between the output of the first image and the output of the second image, and the association relationship comprises any one of the following: the image output positions satisfy a close-range condition, the image contents satisfy a similarity condition, and the image contents conform to a preset scene logical relationship;
a control unit, configured to, in a case where it is detected that a second image and the first image satisfy the processing condition, adjust the first image and/or the second image such that the adjusted first image and the second image have a visual association.
9. An electronic device, comprising:
the memory is used for storing the application program and data generated by the running of the application program;
a processor for detecting whether there is an output second image satisfying a processing condition with a first image in a case where the first image is output; in case it is detected that there is a second image that satisfies the processing condition with the first image, adjusting the first image and/or the second image such that the adjusted first image has a visual association with the second image.
10. An output device, comprising:
a display for outputting a first image;
the controller is configured to detect whether a second image output by another device satisfies a processing condition with the first image, wherein the first image and the second image satisfying the processing condition means that a preset association relationship is satisfied between the output of the first image and the output of the second image, and the association relationship comprises any one of the following: the image output positions satisfy a close-range condition, the image contents satisfy a similarity condition, and the image contents conform to a preset scene logical relationship; and, in a case where it is detected that a second image output by the other device satisfies the processing condition with the first image, to adjust the first image and/or trigger the other device to adjust the second image so that the adjusted first image and second image have a visual association.
CN201911366622.5A 2019-12-26 2019-12-26 Processing method and device, electronic equipment and output equipment Active CN111142821B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911366622.5A CN111142821B (en) 2019-12-26 2019-12-26 Processing method and device, electronic equipment and output equipment


Publications (2)

Publication Number Publication Date
CN111142821A CN111142821A (en) 2020-05-12
CN111142821B true CN111142821B (en) 2021-08-13





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant