CN108848405B - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN108848405B
Authority
CN
China
Prior art keywords
image
position information
sticker
face
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810698567.9A
Other languages
Chinese (zh)
Other versions
CN108848405A (en
Inventor
陈果 (Chen Guo)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu kugou business incubator management Co.,Ltd.
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd filed Critical Guangzhou Kugou Computer Technology Co Ltd
Priority to CN201810698567.9A priority Critical patent/CN108848405B/en
Publication of CN108848405A publication Critical patent/CN108848405A/en
Application granted granted Critical
Publication of CN108848405B publication Critical patent/CN108848405B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/4312 Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/40 Support for services or applications
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/60 Network streaming of media packets
    • H04L 65/75 Media network packet handling
    • H04L 65/762 Media network packet handling at the source
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/2628 Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation

Abstract

The disclosure relates to an image processing method and device, and belongs to the technical field of electronics. The method includes: in a network live broadcast process, acquiring a first image shot by an image acquisition device; if it is detected that a mirror display function is enabled, horizontally flipping the first image to obtain a second image, superimposing a preset sticker image on the second image to obtain a local display image, and displaying the local display image; and superimposing the sticker image on the first image to obtain a live image, and uploading the live image. With the present disclosure, the operation of superimposing the sticker image is divided into two steps: the sticker image is superimposed on the un-flipped first image, and separately superimposed on the flipped second image. Because the sticker is added after the flip, its text reads forward in both images, eliminating the reading obstacle caused by image flipping.

Description

Image processing method and device
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular, to an image processing method and apparatus.
Background
With the development of science and technology, live broadcast applications have gradually entered people's lives. Through a live broadcast application, an anchor can show his or her talent to viewers. During a live broadcast, a sticker image can be added to the live image, and the words the anchor wants to say to the audience in the live broadcast room can be written on the sticker, enriching the content of the live image.
In the terminal held by an anchor, when the terminal detects that the anchor has triggered an operation of adding a sticker image, it acquires a preset sticker image and superimposes it on the live image. If the anchor enables the mirror display function, the terminal horizontally flips the image on which the sticker has been superimposed before displaying it, so that the anchor has the experience of looking into a mirror. Viewers, by contrast, expect a face-to-face experience with the anchor, so they see an image that is not horizontally flipped; accordingly, the image transmitted from the anchor's terminal is the un-flipped one.
In carrying out the present disclosure, the inventors found that at least the following problems exist:
the terminal held by the anchor displays the horizontally flipped image; however, since the sticker image is flipped along with it, the text on the sticker appears reversed, which makes it hard to read.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides the following technical solutions:
according to a first aspect of embodiments of the present disclosure, there is provided an image processing method, the method including:
in a network live broadcast process, acquiring a first image shot by an image acquisition device;
if it is detected that a mirror display function is enabled, horizontally flipping the first image to obtain a second image, superimposing a preset sticker image on the second image to obtain a local display image, and displaying the local display image;
and superimposing the sticker image on the first image to obtain a live image, and uploading the live image.
Optionally, the method further comprises:
acquiring a first face feature corresponding to the preset sticker image;
determining, in the first image, first position information before flipping corresponding to the first face feature;
the superimposing a preset sticker image on the second image includes:
determining, in the second image, flipped position information corresponding to the first position information;
determining second position information for superimposing the preset sticker image on the second image according to the flipped position information and a preset relative position relationship between the face feature and the sticker image;
superimposing the sticker image on a second location in the second image based on the second location information.
Optionally, the determining, in the second image, the flipped position information corresponding to the first position information includes:
determining an image length of the second image in a horizontal direction;
subtracting the abscissa in the first position information from the image length to obtain the abscissa in the flipped position information;
and determining the ordinate in the first position information as the ordinate in the flipped position information.
Optionally, the superimposing the sticker image on a second position in the second image based on the second position information includes:
determining a face rotation angle before flipping in the first image;
determining the difference obtained by subtracting the face rotation angle before flipping from 180 degrees as the flipped face rotation angle;
adjusting the angle of the sticker image based on the flipped face rotation angle;
and superimposing the adjusted sticker image on a second position in the second image based on the second position information.
Optionally, the method further comprises:
acquiring a second face feature corresponding to the sticker image, wherein the second face feature and the first face feature are a pair of symmetric face features;
determining, in the first image, third position information before flipping corresponding to the second face feature;
the superimposing the sticker image on the first image includes:
determining fourth position information for superimposing the sticker image on the first image according to the third position information and a preset relative position relationship between the face feature and the sticker image;
superimposing the sticker image on a fourth location in the first image based on the fourth location information.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus, the apparatus including:
the first acquisition module is used for acquiring a first image shot by image acquisition equipment in the live network broadcast process;
the first superimposing module is used for, when it is detected that the mirror display function is enabled, horizontally flipping the first image to obtain a second image, superimposing a preset sticker image on the second image to obtain a local display image, and displaying the local display image;
and the second superimposing module is used for superimposing the sticker image on the first image to obtain a live image and uploading the live image.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring a first face feature corresponding to a preset sticker image;
a first determining module, configured to determine, in the first image, first position information before flipping corresponding to the first facial feature;
the first superimposing module includes:
a first determining unit, configured to determine, in the second image, flipped position information corresponding to the first position information;
a second determining unit, configured to determine, according to the flipped position information and a preset relative position relationship between the face feature and the sticker image, second position information for superimposing the preset sticker image on the second image;
and the first superimposing unit is used for superimposing the sticker image on a second position in the second image based on the second position information.
Optionally, the first determining unit is configured to:
determining an image length of the second image in a horizontal direction;
subtracting the abscissa in the first position information from the image length to obtain the abscissa in the flipped position information;
and determining the ordinate in the first position information as the ordinate in the flipped position information.
Optionally, the first superimposing unit is configured to:
determining a face rotation angle before flipping in the first image;
determining the difference obtained by subtracting the face rotation angle before flipping from 180 degrees as the flipped face rotation angle;
adjusting the angle of the sticker image based on the flipped face rotation angle;
and superimposing the adjusted sticker image on a second position in the second image based on the second position information.
Optionally, the apparatus further comprises:
the third acquisition module is used for acquiring a second face feature corresponding to the sticker image, wherein the second face feature and the first face feature are a pair of symmetric face features;
a second determining module, configured to determine, in the first image, third position information before flipping corresponding to the second face feature;
the second superimposing module includes:
a third determining unit, configured to determine, according to the third position information and a preset relative position relationship between the face feature and the sticker image, fourth position information for superimposing the sticker image on the first image;
and the second superimposing unit is used for superimposing the sticker image on a fourth position in the first image based on the fourth position information.
According to a third aspect of embodiments of the present disclosure, there is provided a computer device comprising a processor, a communication interface, a memory, and a communication bus, wherein:
the processor, the communication interface and the memory complete mutual communication through the communication bus;
the memory is used for storing a computer program;
the processor is used for executing the program stored in the memory so as to realize the image processing method.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium having stored therein a computer program which, when executed by a processor, implements the above-described image processing method.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
in the method provided by the embodiments of the present disclosure, the operation of superimposing the sticker image is divided into two steps: the sticker image is superimposed on the un-flipped first image, and separately superimposed on the flipped second image. In this way, flipping the first image to obtain the second image never flips the sticker along with it, because the sticker is added only after the flip. Thus, the text in the sticker image reads forward in both the first image and the second image, and the method eliminates the reading obstacle caused by image flipping.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. In the drawings:
FIG. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment;
FIG. 2 is a diagram illustrating a flip effect according to an exemplary embodiment;
FIG. 3 is a diagram illustrating a superposition effect, according to an exemplary embodiment;
FIG. 4 is a schematic diagram illustrating a configuration of an image processing apparatus according to an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating a configuration of a computer device, according to an example embodiment.
With the foregoing drawings in mind, certain embodiments of the disclosure have been shown and described in more detail below. These drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The embodiments of the present disclosure provide an image processing method, which may be implemented by a computer device such as a terminal, optionally in cooperation with a server. The terminal may be a mobile phone, a tablet computer, a desktop computer, a notebook computer, or the like.
The terminal may include a processor, memory, etc. The processor, which may be a Central Processing Unit (CPU) or the like, may be configured to superimpose the sticker image on the first image to obtain a live image, upload the live image, and perform other processing. The Memory may be a RAM (Random Access Memory), a Flash (Flash Memory), or the like, and may be configured to store received data, data required by the processing procedure, data generated in the processing procedure, or the like, such as a sticker image.
The terminal may also include a transceiver, input components, display components, audio output components, and the like. The transceiver may be used for data transmission with the server and may include a Bluetooth component, a WiFi (Wireless Fidelity) component, an antenna, a matching circuit, a modem, and the like. The input component may be a touch screen, keyboard, mouse, etc. The audio output component may be a speaker, headphones, or the like.
A system program and application programs may be installed in the terminal. A user runs various applications on the terminal according to his or her needs; in particular, the terminal may be installed with an application having a live broadcast function.
An exemplary embodiment of the present disclosure provides an image processing method, as shown in fig. 1, a processing flow of the method may include the following steps:
Step S110: in the process of live webcasting, a first image shot by the image acquisition device is acquired.
In implementation, when the anchor wants to perform live webcasting, he or she can open an application with a live broadcast function installed on the terminal, select the option to start live broadcasting, and begin. During the webcast, the terminal can start the image acquisition device and shoot a video through it. The video consists of multiple frames of still images, including the first image.
Step S120: if it is detected that the mirror display function is enabled, the first image is horizontally flipped to obtain a second image, the preset sticker image is superimposed on the second image to obtain a local display image, and the local display image is displayed.
In implementation, the anchor is accustomed to seeing his or her mirror image, as when looking into a mirror, and therefore turns on the mirror display function, which the terminal detects. The image collected by the image acquisition device is a forward (non-mirrored) image, so the terminal needs to horizontally flip the first image collected by the device.
A coordinate system may be established with the lower left corner of the first image as the origin, the horizontal direction of the first image as the X axis, and the vertical direction as the Y axis. Each pixel in the first image has a coordinate position in this system. Keeping each pixel's Y coordinate unchanged, the pixel's X coordinate is subtracted from the total length of the first image, and the resulting difference is used as that pixel's X coordinate in the second image. Mapping every pixel this way horizontally flips the first image into the second image.
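The per-pixel mapping just described can be sketched in Python. This is an illustrative minimal version, not the patent's implementation: the image is stored as a list of pixel rows, and with discrete pixel indices the mirrored X coordinate is (width − 1) − x, the analogue of subtracting the X coordinate from the image length.

```python
def flip_horizontal(image):
    # `image` is a list of rows of pixel values.  Each pixel's Y
    # coordinate (row index) is kept unchanged, while its X coordinate
    # is mapped to (width - 1) - x, i.e. the X coordinate is subtracted
    # from the image length, as described above.
    width = len(image[0])
    return [[row[(width - 1) - x] for x in range(width)] for row in image]
```

A real implementation would use an image library rather than nested lists, but the coordinate arithmetic is the same.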
If the anchor wants to add a sticker image to the live image to liven up the atmosphere of the live broadcast room, for example a sticker of a small animal, the anchor can select the sticker image to add. The terminal then acquires the sticker image, superimposes it on the second image to obtain the local display image, and displays the local display image.
Step S130: the sticker image is superimposed on the first image to obtain a live image, and the live image is uploaded.
In practice, although the anchor is accustomed to seeing his or her mirror image, viewers are accustomed to seeing the anchor's forward image, which matches the feel of facing the anchor directly. Even if the anchor enables the mirror display function, the forward image is displayed on the terminals held by viewers. The anchor's terminal uploads the first image shot by the image acquisition device to the server without horizontal flipping, and the server then pushes it to the viewers' terminals, so the viewers see a forward image. It should be noted that the anchor's terminal may package several frames of images, including the first image, upload them to the server, and have the server push them to the viewers' terminals.
If the anchor adds a sticker image to the live image, the anchor's terminal superimposes the sticker image on the first image to obtain the live image. In this way, viewers see the live image with the sticker image.
In step S120 and step S130, the sticker image is added twice: once to the first image, and once to the flipped image obtained by horizontally flipping the first image. Treating the first image and the second image each as an original onto which the sticker is added, the sticker reads forward whether the original is the forward image or the flipped one, because the sticker is never flipped along with the image.
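The two-step overlay of steps S120 and S130 can be sketched as follows. This is a toy sketch on 2D-list images, not the patent's implementation: `overlay` and the (x, y) placement positions are simplified stand-ins for the blending and positioning logic described later.

```python
def flip_horizontal(image):
    # Mirror each row: pixel x moves to (width - 1) - x.
    return [row[::-1] for row in image]

def overlay(image, sticker, pos):
    # Paste `sticker` (a 2D list) onto a copy of `image`;
    # pos = (x, y) of the sticker's top-left corner.
    out = [row[:] for row in image]
    x0, y0 = pos
    for dy, srow in enumerate(sticker):
        for dx, pixel in enumerate(srow):
            out[y0 + dy][x0 + dx] = pixel
    return out

def make_frames(first_image, sticker, live_pos, local_pos):
    # Two-step overlay: the sticker is applied separately to the
    # un-flipped first image (uploaded as the live image) and to the
    # flipped second image (shown locally), so the sticker itself is
    # never flipped and its text stays readable in both.
    live_image = overlay(first_image, sticker, live_pos)
    local_image = overlay(flip_horizontal(first_image), sticker, local_pos)
    return live_image, local_image
```

Note that the two positions differ: the sticker anchors to the right ear in the forward frame and to the corresponding (mirrored) left ear in the flipped frame, as the following sections explain.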
Optionally, the method provided by the embodiments of the present disclosure further includes: acquiring a first face feature corresponding to a preset sticker image; and determining, in the first image, first position information before flipping corresponding to the first face feature. The step of superimposing the preset sticker image on the second image may include: determining, in the second image, flipped position information corresponding to the first position information; determining second position information for superimposing the preset sticker image on the second image according to the flipped position information and a preset relative position relationship between the face feature and the sticker image; and superimposing the sticker image on a second position in the second image based on the second position information.
In implementation, the first face feature corresponding to a preset sticker image may be acquired, and the face feature recognition operation may be performed on the first image by a face feature recognition algorithm or by a trained machine learning model.
For some sticker images, the content is related to the anchor's facial features, and the sticker's location has special requirements. For example, if the sticker image is a pair of glasses, it needs to be superimposed over the anchor's eyes. As another example, as shown in FIG. 2, the left side of the sticker image is a telephone handset with a sentence attached to it. The handset needs to fit against the anchor's left ear, as if the anchor were making a phone call. The first face feature corresponding to the sticker shown in FIG. 2 is an ear, specifically the right ear in the face, because the right ear becomes the left ear after flipping.
Since the first face feature corresponding to the sticker image is the right ear, first position information corresponding to the right ear can be determined in the first image. The first position information may be a set of coordinate points that outline the approximate contour of the right ear. Horizontally flipping the first image yields the second image, and the right ear in the first image becomes the left ear in the second image. Therefore, the flipped position information corresponding to the left ear in the second image can be determined from the first position information corresponding to the right ear in the first image. The left ear's position and the sticker image have a relative position relationship: for example, if the earlobe of the left ear must coincide with a preset vertex of the telephone handset, the two can be aligned according to the flipped position information corresponding to the left ear, so that the handset in the sticker image fits the anchor's left ear.
Furthermore, the size of the sticker image can be adjusted according to the contour information of the left ear, so that the left ear of the anchor and the telephone receiver in the sticker image are matched in size.
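One way to realize the alignment and scaling described above is to place the sticker so that a chosen anchor vertex (such as the handset tip) lands on the detected feature point (such as the earlobe). The helper below is a hypothetical sketch under that assumption, not the patent's exact formula; `anchor_offset` names the preset vertex's position inside the sticker image.

```python
def sticker_position(feature_point, anchor_offset, scale=1.0):
    # Compute the sticker's top-left corner so that its anchor vertex
    # (e.g. the preset vertex on the handset) coincides with the face
    # feature point (e.g. the earlobe).  Both arguments are (x, y)
    # pairs; `scale` resizes the sticker to match the ear contour.
    fx, fy = feature_point
    ax, ay = anchor_offset
    return (fx - ax * scale, fy - ay * scale)
```

The same helper serves both overlays: with the flipped position information it yields the second position in the second image, and with the un-flipped position information it yields the fourth position in the first image.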
In addition to the ears, the facial features may include features such as eyebrows, eyes, nose, mouth, and contours of the face.
Optionally, the step of determining, in the second image, the flipped position information corresponding to the first position information may include: determining an image length of the second image in the horizontal direction; subtracting the abscissa in the first position information from the image length to obtain the abscissa in the flipped position information; and determining the ordinate in the first position information as the ordinate in the flipped position information.
In implementation, the image length of the second image in the horizontal direction may be determined first. The second image may be normalized so that its length is 1. For example, if the original length of the second image is 10, the second image is scaled down by a factor of ten in the horizontal direction so that its length becomes 1, and the abscissa of each pixel in the second image is reduced by the same factor.
Suppose that in the first image the coordinates of point A are (0.11, 0.85) and the coordinates of point B are (0.85, 0.92), where point A is the left temple of the face, point B is the right temple, and A and B are a pair of symmetric points. After flipping, in the second image, the coordinates of point A become (1 - 0.11, 0.85) = (0.89, 0.85) and the coordinates of point B become (1 - 0.85, 0.92) = (0.15, 0.92). In the second image, point A becomes the right temple and point B becomes the left temple.
By the same conversion, after the first face feature is identified in the first image, its position in the second image can be found directly. Determining the flipped position information of the first face feature in the second image through this coordinate conversion costs far less computation than running the face feature recognition operation again on the second image.
The above describes how to determine position information for a face feature with left-right symmetry. A face feature without such symmetry, such as the bridge of the nose, typically corresponds to a line segment in the image. If point C on the bridge of the nose has coordinates (0.11, 0.85) in the first image, then after flipping its coordinates in the second image become (1 - 0.11, 0.85) = (0.89, 0.85), which still lies on the bridge of the nose; no left-right feature exchange occurs.
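In normalized coordinates the conversion is the same arithmetic for both the symmetric and asymmetric cases; a minimal sketch:

```python
def mirror_point(point, image_length=1.0):
    # Mirror a point across the vertical centre line of an image of the
    # given horizontal length (1.0 after normalization): the abscissa is
    # subtracted from the image length, the ordinate is unchanged.
    x, y = point
    return (image_length - x, y)
```

Applied to the examples above: the left temple (0.11, 0.85) maps to (0.89, 0.85), and the right temple (0.85, 0.92) maps to (0.15, 0.92).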
Optionally, the step of superimposing the sticker image on a second position in the second image based on the second position information may include: determining a face rotation angle before flipping in the first image; determining the difference obtained by subtracting the face rotation angle before flipping from 180 degrees as the flipped face rotation angle; adjusting the angle of the sticker image based on the flipped face rotation angle; and superimposing the adjusted sticker image on the second position in the second image based on the second position information.
In practice, the anchor's face does not necessarily face the lens directly; there may be a certain rotation angle. First, the face rotation angle before flipping in the first image may be determined, for example 50 degrees; then the difference obtained by subtracting it from 180 degrees, namely 130 degrees, is determined as the flipped face rotation angle. If the default angle of the sticker image is 0 degrees, i.e. the sticker is placed horizontally, the angle of the sticker image can be adjusted according to the flipped face rotation angle so that the adjusted sticker better fits the actual rotation of the face. For example, if the sticker image is a pair of glasses, when the anchor's face is rotated relative to the lens, the glasses "worn" by the anchor should rotate by the same angle.
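The angle adjustment reduces to a one-line formula, following the 180-degrees-minus-angle convention stated above:

```python
def flipped_rotation(angle_degrees):
    # After a horizontal flip, a face rotated by theta appears rotated
    # by 180 - theta (e.g. 50 degrees becomes 130 degrees), per the
    # convention in the text; the sticker is then rotated to match.
    return 180.0 - angle_degrees
```

The sticker overlaid on the second image is rotated by this flipped angle, while the sticker overlaid on the first image uses the original angle.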
Optionally, the method provided by the embodiments of the present disclosure further includes: acquiring a second face feature corresponding to the sticker image, wherein the second face feature and the first face feature are a pair of symmetric face features; and determining, in the first image, third position information before flipping corresponding to the second face feature. The step of superimposing the sticker image on the first image may include: determining fourth position information for superimposing the sticker image on the first image according to the third position information and the preset relative position relationship between the face feature and the sticker image; and superimposing the sticker image on a fourth position in the first image based on the fourth position information.
In practice, how to superimpose the sticker image on the second image, i.e., the flipped image, is described above, and how to superimpose the sticker image on the first image will be described below. In this way, after the sticker image is superimposed on the first image, a live image is obtained, which can be uploaded for viewing by a viewer.
First, in the previous step, a first facial feature in the first image has been identified; at the same time, a second facial feature of the first image may also be identified. For example, the first facial feature in the first image is the right ear, and the second facial feature is the left ear, on which the sticker image is to be superimposed. Because the first image does not need to be flipped, the second facial feature corresponding to the sticker image can be identified directly. Then, third position information before flipping corresponding to the second facial feature can be determined in the first image. Next, fourth position information of the sticker image superimposed in the first image can be determined according to the third position information and the preset relative position relationship between the facial feature and the sticker image. Finally, the sticker image can be superimposed on a fourth position in the first image based on the fourth position information.
For example, as shown in fig. 3, the position of the left ear and the sticker image have a relative positional relationship in which the earlobe of the left ear coincides with a predetermined vertex of the telephone receiver, so the predetermined vertex of the receiver can be aligned with the earlobe according to the third position information corresponding to the left ear. Thus, the handset in the sticker image fits the left ear of the anchor.
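Aligning a predetermined point of the sticker with a facial landmark reduces to a simple offset computation (a sketch; the coordinate convention and function name are assumptions for illustration):

```python
def sticker_top_left(landmark_xy, anchor_in_sticker_xy):
    """Return the top-left corner at which to paste the sticker so that
    its anchor point (e.g. a vertex of the receiver) coincides with the
    facial landmark (e.g. the earlobe of the left ear)."""
    lx, ly = landmark_xy
    ax, ay = anchor_in_sticker_xy
    return (lx - ax, ly - ay)

# Earlobe detected at (120, 200); the receiver's anchor vertex sits at
# (30, 10) inside the sticker bitmap.
top_left = sticker_top_left((120, 200), (30, 10))  # (90, 190)
```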
In the method provided by the embodiment of the disclosure, the operation of superimposing the sticker image is divided into two steps: the first step superimposes the sticker image on the unflipped first image, and the second step superimposes it on the second image, i.e., the flipped image. In this way, even though the second image is obtained by flipping the first image, the flip does not affect the sticker image, and the first image is never flipped together with the sticker. Thus, the text in the sticker image reads correctly in both the first image and the second image, and the method provided by the embodiment of the disclosure eliminates the reading obstacle caused by image flipping.
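The two-step overlay described above can be sketched as follows (a minimal illustration using NumPy arrays; the function names and the unblended paste are assumptions made for brevity, not the patent's actual implementation):

```python
import numpy as np

def overlay(img, sticker, top_left):
    """Paste `sticker` into a copy of `img` at (row, col) `top_left`,
    without alpha blending, for illustration only."""
    out = img.copy()
    r, c = top_left
    h, w = sticker.shape[:2]
    out[r:r + h, c:c + w] = sticker
    return out

def process_frame(frame, sticker, pos_in_first, pos_in_second):
    """Overlay on the flipped copy for the local (mirror) display, and on
    the untouched frame for the uploaded live image, so the sticker itself
    is never mirrored in either output."""
    second = frame[:, ::-1].copy()  # horizontal flip of the camera frame
    local_display = overlay(second, sticker, pos_in_second)
    live_image = overlay(frame, sticker, pos_in_first)
    return local_display, live_image
```

Because the sticker is pasted after the flip, its text stays readable in both outputs.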
Still another exemplary embodiment of the present disclosure provides an image processing apparatus, as shown in fig. 4, including:
the first obtaining module 410 is configured to obtain a first image captured by an image capturing device in a live webcast process;
the first superimposing module 420 is configured to, when it is detected that the mirror image display function is enabled, horizontally flip the first image to obtain a second image, superimpose a preset sticker image on the second image to obtain a local display image, and display the local display image;
the second superimposing module 430 is configured to superimpose the sticker image in the first image to obtain a live image, and to upload the live image.
Optionally, the apparatus further comprises:
the second acquisition module is configured to acquire a preset first face feature corresponding to the sticker image;
a first determining module, configured to determine, in the first image, first position information before flipping corresponding to the first facial feature;
the first overlay module 420 includes:
a first determining unit, configured to determine, in the second image, flipped position information corresponding to the first position information;
a second determining unit, configured to determine, according to the flipped position information and a preset relative position relationship between the face feature and the sticker image, second position information of the preset sticker image superimposed in the second image;
and the first overlaying unit is used for overlaying the sticker image on a second position in the second image based on the second position information.
Optionally, the first determining unit is configured to:
determine the image length of the second image in the horizontal direction;
subtract the abscissa in the first position information from the image length to obtain the abscissa in the flipped position information;
and determine the ordinate in the first position information as the ordinate in the flipped position information.
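The first determining unit's rule amounts to mirroring the x coordinate about the image width (a sketch following the patent's "length minus abscissa" formulation; for discrete pixel indices an implementation might instead use `width - 1 - x`):

```python
def flip_position(point, image_width):
    """Map an (x, y) position in the first image to its counterpart in the
    horizontally flipped second image: x' = width - x, y' = y."""
    x, y = point
    return (image_width - x, y)
```

The mapping is its own inverse: flipping a flipped position returns the original position.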
Optionally, the first superimposing unit is configured to:
determine the rotation angle of the face before flipping in the first image;
determine the difference obtained by subtracting the pre-flip rotation angle from 180 degrees as the rotation angle of the face after flipping;
adjust the angle of the sticker image based on the post-flip face rotation angle;
and superimpose the adjusted sticker image on a second position in the second image based on the second position information.
Optionally, the apparatus further comprises:
the third acquisition module is configured to acquire a second face feature corresponding to the sticker image, wherein the second face feature and the first face feature are a pair of symmetric face features;
a second determining module, configured to determine, in the first image, third position information before flipping corresponding to the second face feature;
the second overlay module 430 includes:
a third determining unit, configured to determine, according to the third position information and a preset relative position relationship between the face feature and the sticker image, fourth position information of the sticker image superimposed in the first image;
and the second overlaying unit is used for overlaying the sticker image on a fourth position in the first image based on the fourth position information.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In the apparatus provided by the embodiment of the present disclosure, the operation of superimposing the sticker image is divided into two steps: the first step superimposes the sticker image on the unflipped first image, and the second step superimposes it on the second image, i.e., the flipped image. In this way, even though the second image is obtained by flipping the first image, the flip does not affect the sticker image, and the first image is never flipped together with the sticker. Thus, the text in the sticker image reads correctly in both the first image and the second image, and the apparatus provided by the embodiment of the disclosure eliminates the reading obstacle caused by image flipping.
It should be noted that: in the image processing apparatus provided in the above embodiment, when processing an image, only the division of the above functional modules is taken as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the terminal is divided into different functional modules to complete all or part of the above described functions. In addition, the image processing apparatus and the image processing method provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in the method embodiments in detail and are not described herein again.
Fig. 5 shows a schematic structural diagram of a computer device 1800 according to an exemplary embodiment of the present disclosure. The computer device 1800 may be: a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The computer device 1800 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and the like.
Generally, computer device 1800 includes: a processor 1801 and a memory 1802.
The processor 1801 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 1801 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 1801 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 1801 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing content required to be displayed on the display screen. In some embodiments, the processor 1801 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1802 may include one or more computer-readable storage media, which may be non-transitory. Memory 1802 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1802 is used to store at least one instruction for execution by processor 1801 to implement the image processing methods provided by the method embodiments herein.
In some embodiments, computer device 1800 may also optionally include: a peripheral interface 1803 and at least one peripheral. The processor 1801, memory 1802, and peripheral interface 1803 may be connected by a bus or signal line. Each peripheral device may be connected to the peripheral device interface 1803 by a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1804, touch screen display 1805, camera 1806, audio circuitry 1807, positioning components 1808, and power supply 1809.
The peripheral interface 1803 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1801 and the memory 1802. In some embodiments, the processor 1801, memory 1802, and peripheral interface 1803 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1801, the memory 1802, and the peripheral device interface 1803 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The radio frequency circuit 1804 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1804 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission, and converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1804 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1804 may communicate with other computer devices via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the World Wide Web, metropolitan area networks, intranets, the various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1804 may also include NFC (Near Field Communication) related circuits, which is not limited in this application.
The display screen 1805 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1805 is a touch display screen, the display screen 1805 also has the ability to capture touch signals on or over its surface. The touch signal may be input to the processor 1801 as a control signal for processing. At this point, the display 1805 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 1805, providing the front panel of the computer device 1800; in other embodiments, there may be at least two displays 1805, disposed on different surfaces of the computer device 1800 or in a foldable design; in still other embodiments, the display 1805 may be a flexible display disposed on a curved or folded surface of the computer device 1800. The display 1805 may even be arranged as a non-rectangular irregular figure, i.e., an irregularly shaped screen. The display 1805 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 1806 is used to capture images or video. Optionally, the camera assembly 1806 includes a front camera and a rear camera. Generally, the front camera is disposed on the front panel of the computer device, and the rear camera is disposed on the rear surface of the computer device. In some embodiments, there are at least two rear cameras, each being one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera can be fused with the depth-of-field camera to realize a background blurring function, or fused with the wide-angle camera to realize panoramic and VR (Virtual Reality) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 1806 may also include a flash. The flash may be a monochrome-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
The audio circuitry 1807 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1801 for processing or inputting the electric signals to the radio frequency circuit 1804 to achieve voice communication. The microphones may be multiple and placed at different locations on the computer device 1800 for stereo sound capture or noise reduction purposes. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 1801 or the radio frequency circuitry 1804 to sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 1807 may also include a headphone jack.
The positioning component 1808 is used to locate the current geographic location of the computer device 1800 for navigation or LBS (Location Based Service). The positioning component 1808 may be based on the GPS (Global Positioning System) of the United States, the Beidou system of China, or the Galileo system of the European Union.
The power supply 1809 is used to power various components within the computer device 1800. The power supply 1809 may be ac, dc, disposable or rechargeable. When the power supply 1809 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, computer device 1800 also includes one or more sensors 1810. The one or more sensors 1810 include, but are not limited to: acceleration sensor 1811, gyro sensor 1812, pressure sensor 1813, fingerprint sensor 1814, optical sensor 1815, and proximity sensor 1816.
The acceleration sensor 1811 may detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the computer apparatus 1800. For example, the acceleration sensor 1811 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1801 may control the touch display 1805 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1811. The acceleration sensor 1811 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1812 may detect a body direction and a rotation angle of the computer device 1800, and the gyro sensor 1812 may cooperate with the acceleration sensor 1811 to collect a 3D motion of the user on the computer device 1800. The processor 1801 may implement the following functions according to the data collected by the gyro sensor 1812: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 1813 may be disposed on the side bezel of the computer device 1800 and/or on the lower layer of the touch display 1805. When the pressure sensor 1813 is disposed on the side bezel, it can detect the user's grip signal on the computer device 1800, and the processor 1801 performs left/right-hand recognition or shortcut operations according to the grip signal collected by the pressure sensor 1813. When the pressure sensor 1813 is disposed on the lower layer of the touch display 1805, the processor 1801 controls the operability controls on the UI according to the user's pressure operation on the touch display 1805. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1814 is used to collect a user's fingerprint, and the processor 1801 (or the fingerprint sensor 1814 itself) identifies the user according to the collected fingerprint. Upon recognizing that the user's identity is trusted, the processor 1801 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 1814 may be disposed on the front, back, or side of the computer device 1800. When a physical key or vendor Logo is provided on the computer device 1800, the fingerprint sensor 1814 may be integrated with the physical key or vendor Logo.
The optical sensor 1815 is used to collect the ambient light intensity. In one embodiment, the processor 1801 may control the display brightness of the touch display 1805 based on the ambient light intensity collected by the optical sensor 1815. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1805 is increased; when the ambient light intensity is low, the display brightness of the touch display 1805 is turned down. In another embodiment, the processor 1801 may also dynamically adjust the shooting parameters of the camera assembly 1806 according to the intensity of the ambient light collected by the optical sensor 1815.
The proximity sensor 1816, also known as a distance sensor, is typically provided on the front panel of the computer device 1800. The proximity sensor 1816 is used to gather the distance between the user and the front of the computer device 1800. In one embodiment, when the proximity sensor 1816 detects that the distance between the user and the front of the computer device 1800 is gradually decreasing, the processor 1801 controls the touch display 1805 to switch from the bright-screen state to the off-screen state; when the proximity sensor 1816 detects that the distance is gradually increasing, the processor 1801 controls the touch display 1805 to switch from the off-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration illustrated in FIG. 5 is not intended to be limiting of the computer device 1800 and may include more or fewer components than those illustrated, or may combine certain components, or may employ a different arrangement of components.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An image processing method, characterized in that the method comprises:
in the network live broadcast process, acquiring a first image shot by image acquisition equipment;
acquiring a first face feature corresponding to a preset sticker image;
determining first position information before turning corresponding to the first facial feature in the first image;
if it is detected that the mirror image display function is enabled, horizontally flipping the first image to obtain a second image, and determining, in the second image, flipped position information corresponding to the first position information;
determining second position information of the preset sticker image superimposed in the second image according to the flipped position information and the preset relative position relationship between the face feature and the sticker image;
superimposing the sticker image on a second position in the second image based on the second position information to obtain a local display image, and displaying the local display image;
and superimposing the sticker image in the first image to obtain a live image, and uploading the live image.
2. The method of claim 1, wherein determining the flipped location information corresponding to the first location information in the second image comprises:
determining an image length of the second image in a horizontal direction;
subtracting the abscissa in the first position information from the image length to obtain the abscissa in the flipped position information;
and determining the ordinate in the first position information as the ordinate in the flipped position information.
3. The method of claim 1, wherein the superimposing the sticker image on a second location in the second image based on the second location information comprises:
determining the rotation angle of the face before flipping in the first image;
determining the difference obtained by subtracting the pre-flip rotation angle from 180 degrees as the rotation angle of the face after flipping;
adjusting the angle of the sticker image based on the post-flip face rotation angle;
and superimposing the adjusted sticker image on a second position in the second image based on the second position information.
4. The method of claim 1, further comprising:
acquiring a second face feature corresponding to the sticker image, wherein the second face feature and the first face feature are a pair of symmetric face features;
determining third position information before turning corresponding to the second face feature in the first image;
superimposing the sticker image in the first image, comprising:
determining fourth position information of the sticker image superimposed in the first image according to the third position information and the preset relative position relationship between the face feature and the sticker image;
superimposing the sticker image on a fourth position in the first image based on the fourth position information.
5. An image processing apparatus, characterized in that the apparatus comprises:
the first acquisition module is used for acquiring a first image shot by image acquisition equipment in the live network broadcast process;
the second acquisition module is configured to acquire a first face feature corresponding to a preset sticker image;
a first determining module, configured to determine, in the first image, first position information before flipping corresponding to the first facial feature;
the first superimposing module is configured to, when it is detected that the mirror image display function is enabled, horizontally flip the first image to obtain a second image, superimpose a preset sticker image in the second image to obtain a local display image, and display the local display image;
the second superimposing module is configured to superimpose the sticker image in the first image to obtain a live image and upload the live image;
the first superimposing module includes:
a first determining unit, configured to determine, in the second image, flipped position information corresponding to the first position information;
a second determining unit, configured to determine, according to the flipped position information and a preset relative position relationship between the face feature and the sticker image, second position information of the preset sticker image superimposed in the second image;
and the first overlaying unit is used for overlaying the sticker image on a second position in the second image based on the second position information.
6. The apparatus of claim 5, wherein the first determining unit is configured to:
determining an image length of the second image in a horizontal direction;
subtracting the abscissa in the first position information from the image length to obtain the abscissa in the flipped position information;
and determining the ordinate in the first position information as the ordinate in the flipped position information.
7. The apparatus of claim 5, wherein the first superimposing unit is configured to:
determining the rotation angle of the face before flipping in the first image;
determining the difference obtained by subtracting the pre-flip rotation angle from 180 degrees as the rotation angle of the face after flipping;
adjusting the angle of the sticker image based on the post-flip face rotation angle;
and superimposing the adjusted sticker image on a second position in the second image based on the second position information.
8. The apparatus of claim 5, further comprising:
the third acquisition module is configured to acquire a second face feature corresponding to the sticker image, wherein the second face feature and the first face feature are a pair of symmetric face features;
a second determining module, configured to determine, in the first image, third position information before flipping corresponding to the second face feature;
the second superimposing module includes:
a third determining unit, configured to determine, according to the third position information and a preset relative position relationship between the face feature and the sticker image, fourth position information of the sticker image superimposed in the first image;
and the second overlaying unit is used for overlaying the sticker image on a fourth position in the first image based on the fourth position information.
9. A computer device, comprising a processor, a communication interface, a memory, and a communication bus, wherein:
the processor, the communication interface and the memory complete mutual communication through the communication bus;
the memory is used for storing a computer program;
the processor is configured to execute the program stored in the memory to implement the method steps of any of claims 1-4.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1 to 4.
CN201810698567.9A 2018-06-29 2018-06-29 Image processing method and device Active CN108848405B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810698567.9A CN108848405B (en) 2018-06-29 2018-06-29 Image processing method and device


Publications (2)

Publication Number Publication Date
CN108848405A CN108848405A (en) 2018-11-20
CN108848405B true CN108848405B (en) 2020-10-09

Family

ID=64200079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810698567.9A Active CN108848405B (en) 2018-06-29 2018-06-29 Image processing method and device

Country Status (1)

Country Link
CN (1) CN108848405B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022037215A1 (en) * 2020-08-21 2022-02-24 海信视像科技股份有限公司 Camera, display device and camera control method
CN113542850A (en) * 2020-08-21 2021-10-22 海信视像科技股份有限公司 Display device and data processing method
CN113793410A (en) * 2021-08-31 2021-12-14 北京达佳互联信息技术有限公司 Video processing method and device, electronic equipment and storage medium
CN113992935B (en) * 2021-12-24 2022-06-14 北京达佳互联信息技术有限公司 Live broadcast preview method and device, electronic equipment, storage medium and product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005262698A (en) * 2004-03-19 2005-09-29 Tatsumi Denshi Kogyo Kk Automatic photograph forming device, method of automatically forming photograph, and printing medium
CN105959564A (en) * 2016-06-15 2016-09-21 维沃移动通信有限公司 Photographing method and mobile terminal
CN106295533A (en) * 2016-08-01 2017-01-04 厦门美图之家科技有限公司 Optimization method, device and the camera terminal of a kind of image of autodyning
CN106331880A (en) * 2016-09-09 2017-01-11 腾讯科技(深圳)有限公司 Information processing method and information processing system
CN108076303A (en) * 2016-11-11 2018-05-25 中兴通讯股份有限公司 A kind of video image display method and device


Also Published As

Publication number Publication date
CN108848405A (en) 2018-11-20

Similar Documents

Publication Title
CN110992493B (en) Image processing method, device, electronic equipment and storage medium
CN108401124B (en) Video recording method and device
CN108848405B (en) Image processing method and device
CN109859102B (en) Special effect display method, device, terminal and storage medium
CN111464830B (en) Method, device, system, equipment and storage medium for image display
CN111028144B (en) Video face changing method and device and storage medium
CN110933452B (en) Method and device for displaying lovely face gift and storage medium
CN110533585B (en) Image face changing method, device, system, equipment and storage medium
CN109302632B (en) Method, device, terminal and storage medium for acquiring live video picture
CN110839174A (en) Image processing method and device, computer equipment and storage medium
WO2021238564A1 (en) Display device and distortion parameter determination method, apparatus and system thereof, and storage medium
CN110837300B (en) Virtual interaction method and device, electronic equipment and storage medium
CN111083513B (en) Live broadcast picture processing method and device, terminal and computer readable storage medium
CN108965769B (en) Video display method and device
CN110891181B (en) Live broadcast picture display method and device, storage medium and terminal
CN110933454B (en) Method, device, equipment and storage medium for processing live broadcast budding gift
CN110992268A (en) Background setting method, device, terminal and storage medium
CN112967261B (en) Image fusion method, device, equipment and storage medium
CN111860064A (en) Target detection method, device and equipment based on video and storage medium
CN111369434B (en) Method, device, equipment and storage medium for generating spliced video covers
CN110312144B (en) Live broadcast method, device, terminal and storage medium
CN108881715B (en) Starting method and device of shooting mode, terminal and storage medium
CN109561215B (en) Method, device, terminal and storage medium for controlling beautifying function
CN109275015B (en) Method, device and storage medium for displaying virtual article
CN108881739B (en) Image generation method, device, terminal and storage medium

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220408

Address after: 4119, 41st Floor, Building 1, No. 500, Middle Section of Tianfu Avenue, Chengdu High-tech Zone, China (Sichuan) Pilot Free Trade Zone, Chengdu, Sichuan 610000

Patentee after: Chengdu kugou business incubator management Co.,Ltd.

Address before: No. 315, Huangpu Avenue middle, Tianhe District, Guangzhou City, Guangdong Province

Patentee before: GUANGZHOU KUGOU COMPUTER TECHNOLOGY Co.,Ltd.