CN107665482B - Video data real-time processing method and device for realizing double exposure and computing equipment - Google Patents


Info

Publication number
CN107665482B
CN107665482B (application CN201710864259.4A)
Authority
CN
China
Prior art keywords
image
preset
current frame
video data
frame image
Prior art date
Legal status
Active
Application number
CN201710864259.4A
Other languages
Chinese (zh)
Other versions
CN107665482A (en
Inventor
张望
邱学侃
Current Assignee
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd
Priority to CN201710864259.4A
Publication of CN107665482A
Application granted
Publication of CN107665482B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/90: Dynamic range modification of images or parts thereof
    • G06T5/94: Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/90: Determination of colour characteristics
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/162: Detection; Localisation; Normalisation using pixel segmentation or colour matching
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person
    • G06T2207/30201: Face


Abstract

The invention discloses a video data real-time processing method and apparatus for realizing double exposure, and a computing device. The method comprises: acquiring, in real time, a current frame image containing a specific object from a video being shot and/or recorded by an image acquisition device, or from a currently played video; performing scene segmentation processing on the current frame image to obtain a foreground image of the current frame image for the specific object; detecting key information of the current frame image and determining a specific region belonging to the specific object; loading a preset background image for the foreground image and superimposing a preset foreground image on the partial region of the foreground image that does not belong to the specific region, to obtain a processed image of the current frame; covering the original current frame image with the processed image to obtain processed video data; and displaying the processed video data. The invention adopts a deep-learning method and realizes scene segmentation processing with high efficiency and high accuracy.

Description

Video data real-time processing method and device for realizing double exposure and computing equipment
Technical Field
The invention relates to the field of image processing, in particular to a method and a device for processing video data in real time and computing equipment for realizing double exposure.
Background
With the development of science and technology, image acquisition devices improve day by day. The videos they record are clearer, and their resolution and display effect have improved greatly. However, a recorded video is merely monotonous raw material and cannot meet users' growing demand for personalization. In the prior art, a user can manually post-process a recorded video to meet such personalized requirements, but this demands considerable image-processing skill from the user, takes a long time, and is cumbersome and technically complex.
Therefore, a real-time video data processing method is needed to meet users' personalization requirements in real time.
Disclosure of Invention
In view of the above problems, the present invention is proposed to provide a method and apparatus for real-time processing of video data, a computing device, and a computer program product for implementing double exposure that overcome or at least partially solve the above problems.
According to an aspect of the present invention, there is provided a real-time video data processing method for implementing double exposure, including:
acquiring, in real time, a current frame image containing a specific object from a video being shot and/or recorded by an image acquisition device; or acquiring, in real time, a current frame image containing a specific object from a currently played video;
performing scene segmentation processing on the current frame image to obtain a foreground image of the current frame image for the specific object;
detecting key information of the current frame image and determining a specific region belonging to the specific object;
loading a preset background image for the foreground image, and superimposing a preset foreground image on the partial region of the foreground image that does not belong to the specific region, to obtain a processed image of the current frame;
covering the original current frame image with the processed image of the current frame to obtain processed video data;
and displaying the processed video data.
Optionally, detecting the key information of the current frame image and determining the specific region belonging to the specific object further comprises: detecting key point information of the current frame image and determining the specific region belonging to the specific object.
Optionally, detecting the key information of the current frame image and determining the specific region belonging to the specific object further comprises: detecting key point information and color information of the current frame image and determining the specific region belonging to the specific object.
Optionally, before obtaining the processed image of the current frame, the method further comprises: processing the specific region of the foreground image correspondingly according to the display style of the preset background image and/or the preset foreground image.
Optionally, the corresponding processing of the specific region of the foreground image further comprises: performing skin-smoothing and/or toning processing on the specific region of the foreground image.
Optionally, the specific object is a person, and the specific region of the specific object is a face region;
detecting the key information of the current frame image and determining the specific region belonging to the specific object further comprises:
performing key point detection on the current frame image to determine the facial-feature regions of the person;
performing skin color detection on the current frame image to determine the skin color region of the person;
and determining the face region of the person from the facial-feature regions and the skin color region of the person.
Optionally, the preset foreground image is a first preset picture; the preset background image is a second preset picture.
Optionally, the method further comprises:
and carrying out different color matching processing on the third preset picture to respectively obtain a preset foreground image and a preset background image.
Optionally, the preset foreground image is a frame image in the first preset video; the preset background image is a frame image in the second preset video.
Optionally, the method further comprises:
and carrying out different color matching processing on the frame image in the third preset video to respectively obtain a preset foreground image and a preset background image.
Optionally, displaying the processed video data further comprises: displaying the processed video data in real time;
the method further comprises the following steps: and uploading the processed video data to a cloud server.
Optionally, uploading the processed video data to a cloud server further includes:
and uploading the processed video data to a cloud video platform server so that the cloud video platform server can display the video data on a cloud video platform.
Optionally, uploading the processed video data to a cloud server further includes:
and uploading the processed video data to a cloud live broadcast server so that the cloud live broadcast server can push the video data to a client of a watching user in real time.
Optionally, uploading the processed video data to a cloud server further includes:
and uploading the processed video data to a cloud public server so that the cloud public server pushes the video data to a public attention client.
According to another aspect of the present invention, there is provided a video data real-time processing apparatus for implementing double exposure, including:
the acquisition module, adapted to acquire, in real time, a current frame image containing a specific object from a video being shot and/or recorded by an image acquisition device, or from a currently played video;
the segmentation module, adapted to perform scene segmentation processing on the current frame image to obtain a foreground image of the current frame image for the specific object;
the detection module, adapted to detect key information of the current frame image and determine a specific region belonging to the specific object;
the superposition module, adapted to load a preset background image for the foreground image and superimpose a preset foreground image on the partial region of the foreground image that does not belong to the specific region, to obtain a processed image of the current frame;
the covering module, adapted to cover the original current frame image with the processed image of the current frame to obtain processed video data;
and the display module, adapted to display the processed video data.
Optionally, the detection module is further adapted to:
and detecting key point information of the current frame image, and determining a specific area belonging to a specific object.
Optionally, the detection module is further adapted to:
and detecting key point information and color information of the current frame image, and determining a specific area belonging to a specific object.
Optionally, the apparatus further comprises:
and the processing module is suitable for correspondingly processing the specific area of the foreground image according to the display style mode of the preset background image and/or the preset foreground image.
Optionally, the processing module is further adapted to:
and (3) performing buffing and/or color mixing treatment on a specific area of the foreground image.
Optionally, the specific object is a person; the specific region of the specific object is a face region;
the detection module is further adapted to: detecting key points of the current frame image, and determining the five sense organ regions of the figure; carrying out skin color detection on the current frame image to determine a skin color area of the figure; and determining the face area of the person according to the five sense organ area and the skin color area of the person.
Optionally, the preset foreground image is a first preset picture; the preset background image is a second preset picture.
Optionally, the apparatus further comprises:
and the first color matching processing module is suitable for carrying out different color matching processing on the third preset picture to respectively obtain a preset foreground image and a preset background image.
Optionally, the preset foreground image is a frame image in the first preset video; the preset background image is a frame image in the second preset video.
Optionally, the apparatus further comprises:
and the second color matching processing module is suitable for performing different color matching processing on the frame image in the third preset video to respectively obtain a preset foreground image and a preset background image.
Optionally, the display module is further adapted to display the processed video data in real time;
the device still includes:
and the uploading module is suitable for uploading the processed video data to the cloud server.
Optionally, the upload module is further adapted to:
and uploading the processed video data to a cloud video platform server so that the cloud video platform server can display the video data on a cloud video platform.
Optionally, the upload module is further adapted to:
and uploading the processed video data to a cloud live broadcast server so that the cloud live broadcast server can push the video data to a client of a watching user in real time.
Optionally, the upload module is further adapted to:
and uploading the processed video data to a cloud public server so that the cloud public server pushes the video data to a public attention client.
According to yet another aspect of the present invention, there is provided a computing device comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the video data real-time processing method for realizing double exposure.
According to still another aspect of the present invention, there is provided a computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the video data real-time processing method for implementing double exposure as described above.
According to the video data real-time processing method and apparatus and the computing device for realizing double exposure provided by the invention, a current frame image containing a specific object is acquired in real time from a video being shot and/or recorded by an image acquisition device, or from a currently played video; scene segmentation processing is performed on the current frame image to obtain a foreground image of the current frame image for the specific object; key information of the current frame image is detected and a specific region belonging to the specific object is determined; a preset background image is loaded for the foreground image and a preset foreground image is superimposed on the partial region of the foreground image that does not belong to the specific region, to obtain a processed image of the current frame; the original current frame image is covered with the processed image to obtain processed video data; and the processed video data is displayed. After the current frame image is acquired in real time, the foreground image of the specific object is segmented from it, and the specific region of the specific object is determined by detecting key information of the current frame image. With the specific region preserved, the partial region of the foreground image outside the specific region is superimposed with the preset foreground image and the preset background image is loaded, realizing the double-exposure special effect in the video data. The invention adopts a deep-learning method and realizes scene segmentation processing with high efficiency and high accuracy.
Moreover, the user need not further process the recorded video, which saves the user's time, and the processed video data can be displayed to the user in real time, so the user can conveniently check the display effect. Meanwhile, no particular technical skill is required of the user, which facilitates use by the general public.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 shows a flow chart of a method for real-time processing of video data implementing double exposure according to an embodiment of the invention;
fig. 2 is a flowchart illustrating a method of real-time processing of video data implementing double exposure according to another embodiment of the present invention;
FIG. 3 shows a functional block diagram of a video data real-time processing apparatus implementing double exposure according to one embodiment of the present invention;
fig. 4 shows a functional block diagram of a video data real-time processing apparatus implementing double exposure according to another embodiment of the present invention;
FIG. 5 illustrates a schematic structural diagram of a computing device, according to an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The specific object in the present invention may be any object in the image, such as a person, a plant or an animal. In the embodiments, a person is taken as an example of the specific object, and the face region of the person as an example of the specific region, but the specific object and the specific region are not limited to a person and a face region.
Fig. 1 shows a flowchart of a video data real-time processing method for implementing double exposure according to an embodiment of the present invention. As shown in fig. 1, the method for processing video data in real time to realize double exposure specifically includes the following steps:
step S101, acquiring a current frame image containing a specific object in a video shot and/or recorded by image acquisition equipment in real time; or, the current frame image containing the specific object in the currently played video is acquired in real time.
In this embodiment, the image acquisition device is illustrated by taking a mobile terminal as an example. The current frame image is acquired in real time while the camera of the mobile terminal records or shoots a video. Since the method processes a specific object, only current frame images containing the specific object are acquired. Besides acquiring frames in real time from a video shot and/or recorded by the image acquisition device, the current frame image containing the specific object may also be acquired in real time from a currently played video.
Step S102, carrying out scene segmentation processing on the current frame image to obtain a foreground image of the current frame image aiming at a specific object.
The current frame image contains a specific object, such as a person. Scene segmentation processing mainly separates the specific object from the current frame image to obtain a foreground image for the specific object; the foreground image may contain only the specific object.
A deep-learning method may be used for the scene segmentation processing. Deep learning is a machine-learning method based on representation learning of data. An observation (e.g., an image) can be represented in many ways, such as a vector of per-pixel intensity values, or more abstractly as a series of edges or specially shaped regions. Tasks such as face recognition or facial-expression recognition are easier to learn from examples when suitable representations are used. For instance, a deep-learning person-segmentation method can perform scene segmentation on the current frame image to obtain a foreground image containing the person.
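As a minimal sketch of how such a segmentation result is turned into a foreground image: the patent does not name a particular network, so the sketch below assumes a 0/1 person mask has already been produced by some semantic-segmentation model (the function name and mask convention are illustrative assumptions, not part of the patented method):

```python
import numpy as np

def extract_foreground(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Keep only the pixels a segmentation model labelled as the person.

    frame: HxWx3 image; mask: HxW array of 0/1 labels (1 = person),
    e.g. the argmax output of any semantic-segmentation network.
    Pixels outside the mask are zeroed, so the returned foreground
    image contains only the specific object.
    """
    return frame * mask[..., None].astype(frame.dtype)
```

Any later compositing step then only needs the mask and this foreground image, not the original frame.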
Step S103, detecting key information of the current frame image, and determining a specific region belonging to a specific object.
To determine the specific region, key information detection is performed on the current frame image. Specifically, key information of the specific region can be extracted from the current frame image and detection performed according to it. The key information may be key point information, key region information and/or key line information. The embodiments of the invention take key point information as an example, but the key information of the invention is not limited to key point information. Using key point information improves the processing speed and efficiency of determining the specific region: the specific region can be determined directly from the key points, without complex subsequent computation and analysis of the key information. Meanwhile, key point information is easy to extract and accurate, so the specific region is determined more precisely. The specific region belonging to the specific object is determined from the key point detection performed on the current frame image. For example, since the specific region can be determined from its edge contour, the key point information extracted from the current frame image can be the key points located at the edge of the specific region. When the specific object is a person and the specific region is the person's face region, the extracted key point information includes the key points at the edge of the face region.
And step S104, loading a preset background image for the foreground image, and overlapping the preset foreground image on a partial area which does not belong to the specific area in the foreground image to obtain an image processed by the current frame.
The foreground image is loaded onto the preset background image, and the preset foreground image is superimposed on the partial region of the segmented foreground image that does not belong to the determined specific region. In the resulting processed image of the current frame, the features of the specific region are preserved for display, while the partial region outside the specific region achieves the double-exposure display effect of the preset foreground image blended with that region.
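As a rough sketch of this compositing step (an illustrative interpretation, not the patented implementation; the function name, mask layout and the blend weight `alpha` are assumptions):

```python
import numpy as np

def double_expose(bg, fg_overlay, foreground, person_mask, face_mask, alpha=0.5):
    """Composite one processed frame.

    bg, fg_overlay, foreground: HxWx3 float images (preset background,
    preset foreground, segmented person). person_mask/face_mask: HxW bool.
    Outside the person: the preset background shows through. Person pixels
    outside the face: a blend of the person and the preset foreground
    (the double-exposure region). Face pixels: kept unchanged.
    """
    out = bg.astype(np.float32).copy()
    fg = foreground.astype(np.float32)
    ov = fg_overlay.astype(np.float32)
    double = person_mask & ~face_mask            # person pixels outside the face
    out[double] = (1 - alpha) * fg[double] + alpha * ov[double]
    out[face_mask] = fg[face_mask]               # face region keeps the person's own pixels
    return out
```

The key design point matching the text is that the face region is copied through untouched, so only the rest of the foreground receives the double-exposure blend.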
The preset background image and the preset foreground image may be two different pictures: the preset foreground image a first preset picture and the preset background image a second preset picture, which prevents the region of the foreground image outside the specific region from becoming indistinguishable from the preset background image when the processed frame is displayed. Alternatively, both may derive from one picture with a single display style, such as a third preset picture; in that case, different toning processing is applied to the third preset picture to obtain a bright-toned preset foreground image and a dark-toned preset background image respectively.
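One possible reading of this bright/dark toning, as a minimal sketch; the gain factors `lift` and `drop` are illustrative parameters, not values from the patent:

```python
import numpy as np

def split_tones(picture, lift=1.3, drop=0.6):
    """Derive a bright-toned preset foreground image and a dark-toned
    preset background image from one source picture by applying two
    different brightness gains (a very simple form of toning)."""
    pic = picture.astype(np.float32)
    preset_fg = np.clip(pic * lift, 0, 255).astype(np.uint8)  # bright-toned
    preset_bg = np.clip(pic * drop, 0, 255).astype(np.uint8)  # dark-toned
    return preset_fg, preset_bg
```

Real toning would usually adjust curves or color balance per channel; a plain gain is enough to show why the two derived images stay distinguishable when composited.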
And step S105, covering the processed image of the current frame with the original image of the current frame to obtain processed video data.
The processed image of the current frame directly covers the original current frame image, directly yielding the processed video data, and the recording user can immediately see the processed frame.
Once the processed image of the current frame is obtained, it directly covers the original current frame image. This covering is fast, generally completed within 1/24 second. Because the covering time is so short, the human eye does not perceive the replacement of the original current frame image in the video data. When the processed video data is subsequently displayed, it is shown in real time while the video is shot and/or recorded and/or played, and the user does not notice that frame images in the video data have been covered.
And step S106, displaying the processed video data.
After the processed video data is obtained, it can be displayed in real time, and the user directly sees its display effect.
According to the video data real-time processing method for realizing double exposure provided by the invention, a current frame image containing a specific object is acquired in real time from a video being shot and/or recorded by an image acquisition device, or from a currently played video; scene segmentation processing is performed on the current frame image to obtain a foreground image of the current frame image for the specific object; key information of the current frame image is detected and a specific region belonging to the specific object is determined; a preset background image is loaded for the foreground image and a preset foreground image is superimposed on the partial region of the foreground image that does not belong to the specific region, to obtain a processed image of the current frame; the original current frame image is covered with the processed image to obtain processed video data; and the processed video data is displayed. After the current frame image is acquired in real time, the foreground image of the specific object is segmented from it, and the specific region of the specific object is determined by detecting key information of the current frame image. With the specific region preserved, the partial region of the foreground image outside the specific region is superimposed with the preset foreground image and the preset background image is loaded, realizing the double-exposure special effect in the video data. The invention adopts a deep-learning method and realizes scene segmentation processing with high efficiency and high accuracy. The user need not further process the recorded video, which saves the user's time, and the processed video data can be displayed to the user in real time, so the user can conveniently check the display effect.
Meanwhile, no particular technical skill is required of the user, which facilitates use by the general public.
Fig. 2 shows a flowchart of a video data real-time processing method for implementing double exposure according to another embodiment of the present invention. As shown in fig. 2, the method for processing video data in real time to realize double exposure specifically includes the following steps:
step S201, acquiring a current frame image containing a specific object in a video shot and/or recorded by image acquisition equipment in real time; or, the current frame image containing the specific object in the currently played video is acquired in real time.
Step S202, carrying out scene segmentation processing on the current frame image to obtain a foreground image of the current frame image aiming at the specific object.
For the above steps, refer to the description of steps S101 to S102 in the embodiment of Fig. 1; details are not repeated here.
Step S203, detecting the key point information and the color information of the current frame image, and determining a specific region belonging to a specific object.
In this embodiment, the specific object is a person, and the specific region of the specific object is the person's face region. Key point information of the current frame image is detected: key points of the eyes, eyebrows, mouth, nose, ears and so on are extracted from the current frame image, and the facial-feature regions of the person are determined from them. Meanwhile, color information (skin color) detection can be performed on the current frame image to determine the person's skin color region. Skin color detection may be implemented with a parametric model (based on the assumption that skin color follows a Gaussian probability distribution), a non-parametric model (estimation of a skin-color histogram), skin-color cluster definitions (threshold segmentation in color spaces such as YCbCr, HSV, RGB or CIELAB), or other skin-detection methods, without limitation here. From the facial-feature regions and the skin color region of the person, the specific region belonging to the specific object, that is, the person's face region, can be determined.
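The cluster-definition approach mentioned above can be illustrated with a classic YCbCr threshold rule; the Cb/Cr ranges below are commonly cited literature values (Cb in [77, 127], Cr in [133, 173]), not thresholds specified by the patent:

```python
import numpy as np

def skin_mask_ycbcr(rgb: np.ndarray) -> np.ndarray:
    """Return an HxW boolean skin mask using a fixed YCbCr cluster rule.

    Converts RGB to the Cb/Cr chrominance planes (BT.601 coefficients)
    and keeps pixels that fall inside the commonly used skin cluster.
    """
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
```

A fixed threshold rule is fast enough for per-frame use, which is why such cluster definitions are listed alongside the heavier parametric and histogram models.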
And step S204, carrying out corresponding processing on the specific area of the foreground image according to the display style mode of the preset background image and/or the preset foreground image.
According to the display style mode of the preset background image and/or the preset foreground image, corresponding processing such as skin smoothing and toning may be performed on the specific area of the foreground image. For example, if the preset background image is a clear-sky background image, skin smoothing may be applied to the specific area of the foreground image, such as the face area, to eliminate spots, blemishes, mottling, and other flaws of the skin portion in the face area, so that the face area is finer and smoother and its contour is clearer. The color, tone, and the like of the specific area may also be adjusted according to the color, tone, and the like of the preset background image, so that the display style mode of the specific area is close to or consistent with that of the preset background image.
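The color and tone adjustment described above can be sketched as a per-channel mean shift of the specific area toward the preset background image. The `strength` parameter and the mean-shift formulation are illustrative assumptions, not details fixed by the embodiment:

```python
import numpy as np

def match_region_tone(frame, region_mask, background, strength=0.5):
    """Shift the masked region's per-channel mean partway toward the
    preset background image's mean, bringing the specific area's tone
    closer to the background's display style."""
    out = frame.astype(np.float64)
    region_mean = out[region_mask].mean(axis=0)          # (3,) mean of the area
    bg_mean = background.reshape(-1, 3).mean(axis=0)     # (3,) mean of background
    out[region_mask] += strength * (bg_mean - region_mean)
    return np.clip(out, 0, 255).astype(np.uint8)
```

A `strength` of 1.0 would match the means exactly; smaller values only nudge the tone, preserving more of the original look.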
It should be noted that, when processing the specific region of the foreground image, the feature information of the specific region needs to be retained, and only the display style mode is adjusted. For example, if the specific region is a face region, the original display feature information of the face region, such as the eyes, eyebrows, mouth, nose, ears, and face shape, is retained, and only treatments such as whitening the skin, removing facial speckles, and brightening the skin tone are performed.
If the display style modes of the preset background image and the preset foreground image are inconsistent, either image's display style mode may be designated as the basis for the corresponding processing of the specific area of the foreground image.
Step S205, a preset background image is loaded for the foreground image, and the preset foreground image is superimposed on a partial region of the foreground image that does not belong to the specific region, so as to obtain an image after the current frame is processed.
The preset background image is loaded for the foreground image. The partial area of the segmented foreground image that does not belong to the determined specific area is then identified. For example, when the specific object is a person and the specific area is the person's face area, this partial area covers the hair, clothes, and other regions of the person outside the face area. The preset foreground image is superimposed on this partial area, thereby obtaining the processed image of the current frame.
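The loading and superimposing of step S205 can be sketched as mask-based compositing. The 50/50 alpha blend in the double-exposure zone is an assumption; the embodiment does not fix a blend ratio:

```python
import numpy as np

def double_exposure(person_frame, person_mask, face_mask,
                    preset_fg, preset_bg, alpha=0.5):
    """Compose the processed frame: the face keeps its original pixels,
    the rest of the person is blended with the preset foreground image,
    and everything outside the person shows the preset background image."""
    out = preset_bg.astype(np.float64).copy()
    src = person_frame.astype(np.float64)
    out[person_mask] = src[person_mask]        # place the person on the background
    blend = person_mask & ~face_mask           # person minus face: double-exposure zone
    out[blend] = (alpha * src[blend]
                  + (1 - alpha) * preset_fg.astype(np.float64)[blend])
    return np.clip(out, 0, 255).astype(np.uint8)
```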
The preset background image and the preset foreground image may be preset pictures, or may be frame images in a video. For example, the preset foreground image may be any frame image in a first preset video: any frame image in the first preset video is randomly selected as the preset foreground image. Further, the preset foreground image may change in real time, being replaced with another frame image in the first preset video as time passes. Likewise, the preset background image may be any frame image in a second preset video: any frame image in the second preset video is randomly selected as the preset background image, and the preset background image may also change in real time, being replaced with another frame image in the second preset video as time passes. The first preset video and the second preset video are videos with different display style modes, i.e., the display style modes of the preset foreground image and the preset background image differ. Alternatively, the preset foreground image and the preset background image may both be frame images in a third preset video; they may be the same frame image or different frame images of that video. In this case, since both are frame images in the third preset video, their display style modes are the same.
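Changing the preset image in real time can be sketched as cycling through the preset video's frames by playback timestamp. The wrap-around cycling policy and the fps value are illustrative assumptions:

```python
def preset_frame(video_frames, t_seconds, fps=24):
    """Return the preset foreground/background frame for playback time
    t_seconds, cycling through the preset video so the preset image
    changes in real time as the recorded video advances."""
    index = int(t_seconds * fps) % len(video_frames)
    return video_frames[index]
```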
Therefore, the frame images in the third preset video are subjected to different toning processing; for example, the same frame image or different frame images are adjusted to bright colors and tones to serve as the preset foreground image, and to dim colors and tones to serve as the preset background image, so that the preset background image and the preset foreground image can be distinguished.
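The bright/dim toning split can be sketched as a simple gain adjustment applied to the same frame. The 1.3x gain is an illustrative choice, not a value from the embodiment:

```python
import numpy as np

def tone_variants(frame, gain=1.3):
    """From one video frame, derive a bright-toned preset foreground
    image and a dim-toned preset background image."""
    f = frame.astype(np.float64)
    bright = np.clip(f * gain, 0, 255).astype(np.uint8)  # preset foreground
    dim = np.clip(f / gain, 0, 255).astype(np.uint8)     # preset background
    return bright, dim
```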
Step S206, the processed image of the current frame is covered on the original image of the current frame to obtain processed video data.
The processed image of the current frame directly covers the original current frame image, so that the processed video data is obtained directly. Meanwhile, the user being recorded can also directly see the processed image of the current frame.
Step S207, the processed video data is displayed.
After the processed video data is obtained, the processed video data can be displayed in real time, and a user can directly see the display effect of the processed video data.
And step S208, uploading the processed video data to a cloud server.
The processed video data may be directly uploaded to a cloud server. Specifically, the processed video data may be uploaded to one or more cloud video platform servers, such as those of iQIYI, Youku, Kuaishou, and other cloud video platforms, so that the cloud video platform servers display the video data on the cloud video platform. Alternatively, the processed video data may be uploaded to a cloud live broadcast server; when a user at a live-broadcast viewing end enters the cloud live broadcast server to watch, the cloud live broadcast server pushes the video data to the viewing user's client in real time. Alternatively, the processed video data may be uploaded to a cloud public account server; when a user follows the public account, the cloud public account server pushes the video data to the public account client. Further, the cloud public account server may push video data matching user habits to the public account client according to the viewing habits of the users following the public account.
According to the video data real-time processing method for realizing double exposure provided by the invention, the key point information and the color information of the current frame image are detected, and the specific area belonging to the specific object is determined. The specific area of the foreground image is processed according to the display style mode of the preset background image and/or the preset foreground image, so that the specific area is consistent with or similar to that display style mode and the overall display style mode of the processed video is unified. When the specific area of the foreground image is processed, its original display feature information is retained and only the display style mode is adjusted, so the processed video is not distorted. Besides pictures, the preset foreground image and the preset background image may also be frame images in preset videos that change in real time, making the processed video more vivid and flexible. The method can directly obtain the processed video and directly upload it to the cloud server, without requiring the user to additionally process the recorded video, which saves the user's time; the processed video data can also be displayed to the user in real time, making it convenient for the user to check the display effect. Meanwhile, no technical skill is required of the user, which facilitates use by the general public.
Fig. 3 shows a functional block diagram of a video data real-time processing apparatus implementing double exposure according to an embodiment of the present invention. As shown in fig. 3, the video data real-time processing apparatus for implementing double exposure includes the following modules:
the acquisition module 301 is adapted to acquire a current frame image containing a specific object in a video shot and/or recorded by an image acquisition device in real time; or, the current frame image containing the specific object in the currently played video is acquired in real time.
In this embodiment, the image acquisition device is described by taking a camera of a mobile terminal as an example. The obtaining module 301 obtains, in real time, the current frame image captured by the camera of the mobile terminal when recording or shooting a video. Since the present invention processes a specific object, the obtaining module 301 obtains only current frame images containing the specific object. Besides acquiring the video shot and/or recorded by the image acquisition device in real time, the obtaining module 301 may also acquire, in real time, the current frame image containing the specific object in a currently played video. The specific object in the present invention may be any object in the image, such as a human body, a plant, or an animal; in this embodiment, the specific object is exemplified by a human body, but is not limited thereto.
The segmentation module 302 is adapted to perform scene segmentation processing on the current frame image to obtain a foreground image of the current frame image with respect to a specific object.
The current frame image includes a specific object such as a person. The segmentation module 302 performs scene segmentation on the current frame image, and mainly segments a specific object from the current frame image to obtain a foreground image for the specific object, where the foreground image may only include the specific object.
The segmentation module 302 may use a deep learning method when performing scene segmentation processing on the current frame image. Deep learning is a machine learning method based on representation learning of data. An observation (e.g., an image) may be represented in many ways, such as a vector of intensity values for each pixel, or more abstractly as a series of edges, regions of particular shapes, and the like. Tasks (e.g., face recognition or facial expression recognition) are easier to learn from examples when suitable representations are used. For example, the segmentation module 302 may perform scene segmentation on the current frame image by using a deep-learning-based human body segmentation method to obtain a foreground image containing the person.
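The output of such a segmentation network is typically a per-pixel person probability map. The sketch below shows only the mask post-processing that yields the foreground image; the network itself (e.g., a portrait-segmentation CNN) is assumed:

```python
import numpy as np

def extract_foreground(frame, person_prob, threshold=0.5):
    """Threshold the segmentation network's per-pixel person probability
    into a binary mask and zero out all non-person pixels, yielding a
    foreground image that contains only the specific object."""
    mask = person_prob >= threshold
    foreground = frame.copy()
    foreground[~mask] = 0  # keep only the specific object
    return foreground, mask
```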
The detecting module 303 is adapted to perform key information detection on the current frame image and determine a specific region belonging to a specific object.
The detection module 303 performs key information detection on the current frame image in order to determine the specific area. Specifically, the detection module 303 may extract key information of the specific region from the current frame image and perform detection according to the key information. The key information may be key point information, key area information, and/or key line information. The embodiment of the present invention is described by taking key point information as an example, but the key information of the present invention is not limited to key point information. Using key point information improves the processing speed and efficiency of determining the specific area: the specific area can be determined directly from the key point information, without complex subsequent operations such as calculation and analysis of the key information. Meanwhile, key point information is convenient and accurate to extract, so the determination of the specific area is more accurate. The detection module 303 determines the specific region belonging to the specific object according to the key point information detected in the current frame image. For example, the specific area may be determined according to its edge contour, so when extracting key point information from the current frame image, the detection module 303 may extract the key point information located at the edge of the specific area. When the specific object is a person and the specific region is the person's face region, the key point information extracted by the detection module 303 includes key point information located at the edge of the face region.
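Deriving the specific region from edge key points can be sketched as building a bounding region around the detected points. The key-point detector is assumed; the rectangular region and the 10% margin are illustrative simplifications (a convex hull of the edge points could be used instead):

```python
import numpy as np

def face_region_from_keypoints(keypoints, h, w, margin=0.1):
    """Build an (h, w) boolean face-region mask from facial key points
    (eyes, eyebrows, mouth, nose, ears). keypoints: iterable of (x, y)."""
    pts = np.asarray(list(keypoints), dtype=np.float64)
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    mx, my = margin * (x1 - x0), margin * (y1 - y0)  # expand past the edge points
    mask = np.zeros((h, w), dtype=bool)
    r0, r1 = max(0, int(y0 - my)), min(h, int(np.ceil(y1 + my)))
    c0, c1 = max(0, int(x0 - mx)), min(w, int(np.ceil(x1 + mx)))
    mask[r0:r1, c0:c1] = True
    return mask
```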
In this embodiment, the specific object is a person, and the specific region of the specific object is the person's face region. The detection module 303 detects key point information of the current frame image, and may determine the facial feature regions of the person by extracting key point information of the eyes, eyebrows, mouth, nose, ears, and the like from the current frame image. Meanwhile, the detection module 303 may also perform color information (skin color) detection on the current frame image to determine the skin color region of the person. Skin color detection may be implemented by a parameterized model (based on the assumption that skin color obeys a Gaussian probability distribution), a non-parameterized model (estimation of a skin color histogram), skin color cluster definition (color space threshold segmentation in YCbCr, HSV, RGB, CIELAB, and the like), or other skin color detection methods, which are not limited herein. The detection module 303 may then determine the specific region belonging to the specific object, that is, the face region of the person, based on the facial feature regions and the skin color region.
The superimposing module 304 is adapted to load a preset background image for the foreground image, and superimpose the preset foreground image on a partial area of the foreground image that does not belong to the specific area, so as to obtain an image after the current frame is processed.
The superimposing module 304 loads the preset background image for the foreground image. The partial region of the segmented foreground image that does not belong to the determined specific region is then identified; for example, when the specific object is a person and the specific region is the person's face region, this partial region covers the hair, clothes, and other regions of the person outside the face region. The superimposing module 304 superimposes the preset foreground image on this partial region, thereby obtaining the processed image of the current frame. The processed image of the current frame retains the feature display of the specific region, while the partial region of the foreground image that does not belong to the specific region achieves the double-exposure display effect of the preset foreground image and that partial region.
And the covering module 305 is adapted to cover the original current frame image with the processed image of the current frame to obtain processed video data.
The covering module 305 directly covers the original current frame image with the processed image of the current frame, so that the processed video data is obtained directly. Meanwhile, the user being recorded can also directly see the processed image of the current frame.
When the superimposing module 304 obtains the processed image of the current frame, the covering module 305 directly covers the original image of the current frame with it. The covering module 305 performs the covering quickly, typically within 1/24 second. Since the covering process of the covering module 305 takes such a short time, the human eye does not perceive it, i.e., the human eye does not perceive the process of covering the original current frame image in the video data. Thus, when the display module 306 subsequently displays the processed video data, it is equivalent to displaying the processed video data in real time while the video data is shot and/or recorded and/or played, and the user does not notice that the frame images in the video data have been covered.
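The 1/24-second figure above is a per-frame real-time budget. A simple way to instrument whether the processing pipeline meets it is sketched below; the helper and the budget constant are illustrative, not part of the apparatus:

```python
import time

FRAME_BUDGET_S = 1.0 / 24  # one frame period at 24 fps

def process_within_budget(process_fn, frame):
    """Run process_fn on one frame and report whether it finished
    within the real-time frame budget."""
    start = time.perf_counter()
    result = process_fn(frame)
    elapsed = time.perf_counter() - start
    return result, elapsed <= FRAME_BUDGET_S
```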
A display module 306 adapted to display the processed video data.
After the processed video data is obtained, the display module 306 can display the processed video data in real time, and a user can directly see the display effect of the processed video data.
According to the video data real-time processing apparatus for realizing double exposure provided by the invention, a current frame image containing a specific object in a video shot and/or recorded by an image acquisition device is obtained in real time, or a current frame image containing the specific object in a currently played video is obtained in real time; scene segmentation processing is performed on the current frame image to obtain a foreground image of the current frame image for the specific object; key information of the current frame image is detected, and a specific area belonging to the specific object is determined; a preset background image is loaded for the foreground image, and the preset foreground image is superimposed on the partial area of the foreground image that does not belong to the specific area, to obtain the processed image of the current frame; the original current frame image is covered with the processed image of the current frame to obtain processed video data; and the processed video data is displayed. After the current frame image is acquired in real time, the foreground image of the specific object is segmented from the current frame image, and the specific area of the specific object is determined by detecting the key information of the current frame image. While preserving the specific area, the partial area of the foreground image that does not belong to the specific area is superimposed with the preset foreground image, and the preset background image is loaded, thereby realizing the double exposure special effect in the video data. The invention adopts a deep learning method and realizes scene segmentation processing with high efficiency and high accuracy. The user does not need to additionally process the recorded video, which saves the user's time; the processed video data can also be displayed to the user in real time, making it convenient for the user to check the display effect.
Meanwhile, no technical skill is required of the user, which facilitates use by the general public.
Fig. 4 shows a functional block diagram of a video data real-time processing apparatus implementing double exposure according to another embodiment of the present invention. As shown in fig. 4, the difference from fig. 3 is that the video data real-time processing apparatus for implementing double exposure further includes:
the processing module 307 is adapted to perform corresponding processing on a specific area of the foreground image according to a display style mode of a preset background image and/or a preset foreground image.
The processing module 307 may perform corresponding processing such as skin smoothing and toning on the specific area of the foreground image according to the display style mode of the preset background image and/or the preset foreground image. For example, if the preset background image is a clear-sky background image, the processing module 307 may apply skin smoothing to the specific region of the foreground image, such as the face region, to eliminate spots, blemishes, mottling, and other flaws of the skin portion in the face region, so that the face region is finer and smoother and its contour is clearer. The processing module 307 also adjusts the color, tone, and the like of the specific region according to the color, tone, and the like of the preset background image, so that the display style mode of the specific region is close to or consistent with that of the preset background image.
It should be noted that, when processing the specific area of the foreground image, the processing module 307 needs to retain the feature information of the specific area and adjust only the display style mode. For example, if the specific region is a face region, the processing module 307 retains the original display feature information of the face region, such as the eyes, eyebrows, mouth, nose, ears, and face shape, and only whitens the skin, removes facial speckles, and brightens the skin tone.
If the display style modes of the preset background image and the preset foreground image are inconsistent, the processing module 307 may designate either image's display style mode as the basis for the corresponding processing of the specific area of the foreground image.
The first color matching processing module 308 is adapted to perform different color matching processing on the third preset picture to obtain a preset foreground image and a preset background image respectively.
The preset background image and the preset foreground image may be two different pictures: the preset foreground image may be a first preset picture and the preset background image a second preset picture, so that when the processed image of the current frame is displayed, the partial areas of the foreground image that do not belong to the specific area remain distinguishable from the preset background image. Alternatively, the preset background image and the preset foreground image may be derived from one picture with a single display style, such as a third preset picture. In that case, the first color matching processing module 308 performs different color matching processing on the third preset picture to obtain a bright-tone preset foreground image and a dark-tone preset background image, respectively.
The second color matching processing module 309 is adapted to perform different color matching processing on the frame image in the third preset video to obtain a preset foreground image and a preset background image respectively.
Besides pictures, the preset background image and the preset foreground image may be frame images in a video. For example, the preset foreground image may be any frame image in a first preset video: any frame image in the first preset video is randomly selected as the preset foreground image, and the preset foreground image may change in real time, being replaced with another frame image in the first preset video as time passes. Likewise, the preset background image may be any frame image in a second preset video: any frame image in the second preset video is randomly selected as the preset background image, and the preset background image may also change in real time, being replaced with another frame image in the second preset video as time passes. The first preset video and the second preset video are videos with different display style modes, i.e., the display style modes of the preset foreground image and the preset background image differ. Alternatively, the preset foreground image and the preset background image may both be frame images in a third preset video; they may be the same frame image or different frame images of that video. In this case, since both are frame images in the third preset video, their display style modes are the same.
The second color matching processing module 309 performs different color matching processing on the frame images in the third preset video; for example, the second color matching processing module 309 adjusts the same frame image or different frame images to bright colors and tones to serve as the preset foreground image, and to dim colors and tones to serve as the preset background image, so as to distinguish the preset foreground image from the preset background image.
Either or both of the first color matching processing module 308 and the second color matching processing module 309 are selected for execution according to the specific implementation.
The uploading module 310 is adapted to upload the processed video data to a cloud server.
The uploading module 310 may directly upload the processed video data to a cloud server. Specifically, the uploading module 310 may upload the processed video data to one or more cloud video platform servers, such as those of iQIYI, Youku, Kuaishou, and other cloud video platforms, so that the cloud video platform servers display the video data on the cloud video platform. Alternatively, the uploading module 310 may upload the processed video data to a cloud live broadcast server; when a user at a live-broadcast viewing end enters the cloud live broadcast server to watch, the cloud live broadcast server pushes the video data to the viewing user's client in real time. Alternatively, the uploading module 310 may upload the processed video data to a cloud public account server; when a user follows the public account, the cloud public account server pushes the video data to the public account client. Further, the cloud public account server may push video data matching user habits to the public account client according to the viewing habits of the users following the public account.
According to the video data real-time processing apparatus for realizing double exposure provided by the invention, the key point information and the color information of the current frame image are detected, and the specific area belonging to the specific object is determined. The specific area of the foreground image is processed according to the display style mode of the preset background image and/or the preset foreground image, so that the specific area is consistent with or similar to that display style mode and the overall display style mode of the processed video is unified. When the specific area of the foreground image is processed, its original display feature information is retained and only the display style mode is adjusted, so the processed video is not distorted. Besides pictures, the preset foreground image and the preset background image may also be frame images in preset videos that change in real time, making the processed video more vivid and flexible. The apparatus can directly obtain the processed video and directly upload it to the cloud server, without requiring the user to additionally process the recorded video, which saves the user's time; the processed video data can also be displayed to the user in real time, making it convenient for the user to check the display effect. Meanwhile, no technical skill is required of the user, which facilitates use by the general public.
The present application further provides a non-volatile computer storage medium in which at least one executable instruction is stored; the computer-executable instruction can perform the video data real-time processing method for realizing double exposure in any of the above method embodiments.
Fig. 5 is a schematic structural diagram of a computing device according to an embodiment of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the computing device.
As shown in fig. 5, the computing device may include: a processor 502, a communication interface 504, a memory 506, and a communication bus 508.
Wherein:
the processor 502, communication interface 504, and memory 506 communicate with one another via a communication bus 508.
A communication interface 504 for communicating with network elements of other devices, such as clients or other servers.
The processor 502 is configured to execute the program 510, and may specifically execute the relevant steps in the above-described embodiment of the method for processing video data in real time to realize double exposure.
In particular, program 510 may include program code that includes computer operating instructions.
The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The computing device includes one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.
And a memory 506 for storing a program 510. The memory 506 may comprise high-speed RAM memory, and may also include non-volatile memory (non-volatile memory), such as at least one disk memory.
The program 510 may be specifically configured to enable the processor 502 to execute a video data real-time processing method for realizing double exposure in any of the above-described method embodiments. For specific implementation of each step in the program 510, reference may be made to corresponding steps and corresponding descriptions in units in the foregoing embodiment for implementing real-time processing of video data with double exposure, which are not described herein again. It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described devices and modules may refer to the corresponding process descriptions in the foregoing method embodiments, and are not described herein again.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and arranged in one or more devices different from those of the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of an apparatus for real-time processing of video data implementing double exposure in accordance with embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any ordering; these words may be interpreted as names.

Claims (28)

1. A real-time processing method of video data for realizing double exposure comprises the following steps:
acquiring a current frame image containing a specific object in a video shot and/or recorded by image acquisition equipment in real time; or, acquiring a current frame image containing a specific object in a currently played video in real time;
performing scene segmentation processing on the current frame image to obtain a foreground image of the current frame image aiming at the specific object;
detecting key information of the current frame image, and determining a specific area belonging to a specific object;
loading a preset background image for the foreground image, and superimposing a preset foreground image on the partial area of the foreground image that does not belong to the specific area, to obtain a processed image of the current frame;
covering the original image of the current frame with the processed image of the current frame to obtain processed video data;
displaying the processed video data;
before obtaining the processed image of the current frame, the method further includes: processing the specific area of the foreground image correspondingly according to the display style of the preset background image and/or the preset foreground image.
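The steps of claim 1 can be sketched as a per-frame compositing routine. The sketch below assumes the segmentation and detection steps have already produced boolean masks; all function and variable names are hypothetical, and the 50/50 blend used for the superposition is an illustrative choice the claim does not specify:

```python
import numpy as np

def process_frame(frame, fg_mask, face_mask, bg_image, fg_image):
    """Composite one frame: preset background behind the segmented person,
    preset foreground superimposed on the non-face part of the person.

    frame, bg_image, fg_image: HxWx3 uint8 arrays; fg_mask, face_mask: HxW bool.
    """
    out = bg_image.copy()                      # load the preset background image
    out[fg_mask] = frame[fg_mask]              # keep the segmented foreground (specific object)
    overlay = fg_mask & ~face_mask             # foreground area outside the specific (face) area
    # superimpose the preset foreground image on that partial area (50/50 blend is illustrative)
    out[overlay] = (0.5 * out[overlay] + 0.5 * fg_image[overlay]).astype(np.uint8)
    return out
```

In a real-time setting this routine would run once per captured or decoded frame, with the processed frame written back over the original before display.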
2. The method of claim 1, wherein the detecting key information of the current frame image, and the determining a specific region belonging to a specific object further comprises: and detecting key point information of the current frame image, and determining a specific area belonging to a specific object.
3. The method of claim 1, wherein the detecting key information of the current frame image, and the determining a specific region belonging to a specific object further comprises: and detecting key point information and color information of the current frame image, and determining a specific area belonging to a specific object.
4. The method of claim 1, wherein the processing the particular region of the foreground image accordingly further comprises: performing skin-smoothing and/or color-toning processing on the specific area of the foreground image.
5. The method of any of claims 1-4, wherein the specific object is a person; the specific region of the specific object is a face region;
the detecting the key information of the current frame image and the determining the specific area belonging to the specific object further comprises:
detecting key points of the current frame image, and determining the facial-feature regions of the person;
performing skin-color detection on the current frame image to determine a skin-color region of the person;
and determining the face region of the person according to the facial-feature regions and the skin-color region of the person.
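The skin-color detection step of claim 5 is commonly done by thresholding chroma in the YCrCb color space. A minimal sketch, assuming a BT.601 RGB-to-YCrCb conversion and the widely used Cr/Cb heuristic bounds — the patent specifies neither the color space nor the thresholds:

```python
import numpy as np

def skin_mask(rgb):
    """Return a boolean skin-color mask for an HxWx3 RGB image in [0, 255].

    Conversion follows BT.601; the Cr/Cb bounds are common heuristic values,
    not taken from the patent.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    y = 0.299 * r + 0.587 * g + 0.114 * b    # luma
    cr = (r - y) * 0.713 + 128.0             # red-difference chroma
    cb = (b - y) * 0.564 + 128.0             # blue-difference chroma
    return (cr >= 133) & (cr <= 173) & (cb >= 77) & (cb <= 127)
```

The face region would then be the intersection (or union, depending on design) of this mask with the region spanned by the detected facial-feature key points.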
6. The method according to any one of claims 1-5, wherein the preset foreground image is a first preset picture; the preset background image is a second preset picture.
7. The method according to any one of claims 1-5, wherein the method further comprises:
and performing different color-toning processing on a third preset picture to obtain the preset foreground image and the preset background image respectively.
8. The method according to any one of claims 1-5, wherein the preset foreground image is a frame image in a first preset video; the preset background image is a frame image in a second preset video.
9. The method according to any one of claims 1-5, wherein the method further comprises:
and performing different color-toning processing on a frame image in a third preset video to obtain the preset foreground image and the preset background image respectively.
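Claims 7 and 9 derive both preset images from a single source by toning it two different ways. One simple realization is per-channel gain adjustment; the warm/cool gain values below are arbitrary illustrations, not values from the patent:

```python
import numpy as np

def toned_variants(preset):
    """Derive a preset foreground image and a preset background image from one
    preset picture (or preset-video frame) via two different color tonings.

    The warm/cool channel gains are illustrative assumptions.
    """
    img = preset.astype(float)
    fg = np.clip(img * (1.10, 1.00, 0.90), 0, 255).astype(np.uint8)  # warm tone
    bg = np.clip(img * (0.90, 1.00, 1.10), 0, 255).astype(np.uint8)  # cool tone
    return fg, bg
```

Using contrasting tonings for the two layers keeps the superimposed foreground visually distinguishable from the background in the final composite.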
10. The method of any of claims 1-9, wherein the displaying the processed video data further comprises: displaying the processed video data in real time;
the method further comprises: uploading the processed video data to a cloud server.
11. The method of claim 10, wherein uploading the processed video data to a cloud server further comprises:
and uploading the processed video data to a cloud video platform server so that the cloud video platform server can display the video data on a cloud video platform.
12. The method of claim 10, wherein uploading the processed video data to a cloud server further comprises:
and uploading the processed video data to a cloud live broadcast server so that the cloud live broadcast server pushes the video data to clients of viewing users in real time.
13. The method of claim 10, wherein uploading the processed video data to a cloud server further comprises:
and uploading the processed video data to a cloud public-account server so that the cloud public-account server pushes the video data to clients following the public account.
14. A video data real-time processing apparatus implementing double exposure, comprising:
the acquisition module is suitable for acquiring a current frame image containing a specific object in a video shot and/or recorded by image acquisition equipment in real time; or, acquiring a current frame image containing a specific object in a currently played video in real time;
the segmentation module is suitable for carrying out scene segmentation processing on the current frame image to obtain a foreground image of the current frame image aiming at the specific object;
the detection module is suitable for detecting key information of the current frame image and determining a specific area belonging to a specific object;
the superposition module is suitable for loading a preset background image for the foreground image, and superimposing a preset foreground image on the partial area of the foreground image that does not belong to the specific area, to obtain a processed image of the current frame;
the covering module is suitable for covering the original image of the current frame with the processed image of the current frame to obtain processed video data;
the display module is suitable for displaying the processed video data;
wherein the apparatus further comprises:
and the processing module is suitable for processing the specific area of the foreground image correspondingly according to the display style of the preset background image and/or the preset foreground image.
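The claims do not fix the blend used by the superposition module; a screen blend is one common way to produce the double-exposure look named in the title. A sketch, with the choice of blend mode being an assumption:

```python
import numpy as np

def screen_blend(a, b):
    """Screen-blend two uint8 images: result = 255 - (255-a)*(255-b)/255.

    Brightens wherever either input is bright, giving the classic
    double-exposure appearance; the patent does not name a blend mode,
    so screen is an illustrative choice.
    """
    a_f = a.astype(float)
    b_f = b.astype(float)
    return (255.0 - (255.0 - a_f) * (255.0 - b_f) / 255.0).astype(np.uint8)
```

Screen blending is symmetric and never darkens either input, which is why it is a frequent default for double-exposure style composites.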
15. The apparatus of claim 14, wherein the detection module is further adapted to:
and detecting key point information of the current frame image, and determining a specific area belonging to a specific object.
16. The apparatus of claim 14, wherein the detection module is further adapted to:
and detecting key point information and color information of the current frame image, and determining a specific area belonging to a specific object.
17. The apparatus of claim 14, wherein the processing module is further adapted to:
and performing skin-smoothing and/or color-toning processing on the specific area of the foreground image.
18. The apparatus according to any one of claims 14-17, wherein the specific object is a person; the specific region of the specific object is a face region;
the detection module is further adapted to: detect key points of the current frame image, and determine the facial-feature regions of the person; perform skin-color detection on the current frame image to determine a skin-color region of the person; and determine the face region of the person according to the facial-feature regions and the skin-color region of the person.
19. The apparatus according to any one of claims 14-18, wherein the preset foreground image is a first preset picture; the preset background image is a second preset picture.
20. The apparatus of any one of claims 14-18, wherein the apparatus further comprises:
and the first color-toning processing module is suitable for performing different color-toning processing on a third preset picture to obtain the preset foreground image and the preset background image respectively.
21. The apparatus according to any one of claims 14-18, wherein the preset foreground image is a frame image in a first preset video; the preset background image is a frame image in a second preset video.
22. The apparatus of any one of claims 14-18, wherein the apparatus further comprises:
and the second color-toning processing module is suitable for performing different color-toning processing on a frame image in a third preset video to obtain the preset foreground image and the preset background image respectively.
23. The apparatus of any one of claims 14-22, wherein the display module is further adapted to: displaying the processed video data in real time;
the device further comprises:
and the uploading module is suitable for uploading the processed video data to the cloud server.
24. The apparatus of claim 23, wherein the upload module is further adapted to:
and uploading the processed video data to a cloud video platform server so that the cloud video platform server can display the video data on a cloud video platform.
25. The apparatus of claim 23, wherein the upload module is further adapted to:
and uploading the processed video data to a cloud live broadcast server so that the cloud live broadcast server can push the video data to a client of a watching user in real time.
26. The apparatus of claim 23, wherein the upload module is further adapted to:
and uploading the processed video data to a cloud public-account server so that the cloud public-account server pushes the video data to clients following the public account.
27. A computing device, comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the operation corresponding to the video data real-time processing method for realizing double exposure according to any one of claims 1-13.
28. A computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform operations corresponding to the method for real-time processing of video data implementing double exposure according to any one of claims 1 to 13.
CN201710864259.4A 2017-09-22 2017-09-22 Video data real-time processing method and device for realizing double exposure and computing equipment Active CN107665482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710864259.4A CN107665482B (en) 2017-09-22 2017-09-22 Video data real-time processing method and device for realizing double exposure and computing equipment


Publications (2)

Publication Number Publication Date
CN107665482A CN107665482A (en) 2018-02-06
CN107665482B true CN107665482B (en) 2021-07-23

Family

ID=61098126


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108491818B (en) * 2018-03-30 2019-07-05 北京三快在线科技有限公司 Detection method, device and the electronic equipment of target object
CN109961453B (en) * 2018-10-15 2021-03-12 华为技术有限公司 Image processing method, device and equipment
CN113112505B (en) 2018-10-15 2022-04-29 华为技术有限公司 Image processing method, device and equipment
CN110084154B (en) * 2019-04-12 2021-09-17 北京字节跳动网络技术有限公司 Method and device for rendering image, electronic equipment and computer readable storage medium
CN110266970A (en) * 2019-05-31 2019-09-20 上海萌鱼网络科技有限公司 A kind of short video creating method and system
CN110363172A (en) * 2019-07-22 2019-10-22 曲靖正则软件开发有限公司 A kind of method for processing video frequency, device, electronic equipment and readable storage medium storing program for executing
CN113132795A (en) 2019-12-30 2021-07-16 北京字节跳动网络技术有限公司 Image processing method and device
CN112700422A (en) * 2021-01-06 2021-04-23 百果园技术(新加坡)有限公司 Overexposure point detection method and device, electronic equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101443791A (en) * 2006-05-03 2009-05-27 快图影像有限公司 Improved foreground/background separation in digital images
CN101493930A (en) * 2008-01-21 2009-07-29 保定市天河电子技术有限公司 Loading exchanging method and transmission exchanging method
CN104732506A (en) * 2015-03-27 2015-06-24 浙江大学 Character picture color style converting method based on face semantic analysis
CN106447642A (en) * 2016-08-31 2017-02-22 北京云图微动科技有限公司 Double exposure fusion method and device for image
US9602735B2 (en) * 2012-12-19 2017-03-21 International Business Machines Corporation Digital imaging exposure metering system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100515042C (en) * 2007-03-29 2009-07-15 上海交通大学 Multiple exposure image intensifying method
CN102075688B (en) * 2010-12-28 2012-07-25 青岛海信网络科技股份有限公司 Wide dynamic processing method for single-frame double-exposure image
CN103293825B (en) * 2013-06-26 2014-11-26 深圳市中兴移动通信有限公司 Multiple exposure method and device
CN107146204A (en) * 2017-03-20 2017-09-08 深圳市金立通信设备有限公司 An image beautification method and terminal




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant