CN110084204A - Image processing method, device and electronic equipment based on target object posture
- Publication number: CN110084204A
- Application number: CN201910357692.8A
- Authority: CN (China)
- Legal status: Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/11—Hand-related biometrics; Hand pose recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/117—Biometrics derived from hands
Abstract
The present disclosure provides an image processing method, an apparatus, and an electronic device based on the posture of a target object. The image processing method based on the posture of the target object comprises: acquiring an original image containing a target object from an image source; segmenting the target object from the original image to generate a segmented target object; dividing the segmented target object into at least two target object regions; detecting the posture of the target object; and, in response to detecting that the target object is in a first posture, performing first image processing on the target object, wherein the first image processing generates a first image at a predetermined position of the target object and processes the at least two target object regions with different materials respectively. By dividing the target object into at least two target object regions and performing image processing on those regions, the disclosure solves the technical problem that special effects in the prior art lack detail and are insufficiently realistic.
Description
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method and apparatus based on a target object posture, and an electronic device.
Background
With the development of computer technology, the range of applications of intelligent terminals has expanded greatly; for example, they can be used to listen to music, play games, chat online, take photographs, and so on. As for photographing, the cameras of intelligent terminals now exceed ten million pixels, offering high definition and a photographing effect comparable to that of a professional camera.
At present, when an intelligent terminal is used for photographing, not only can the conventional photographing effects be achieved with the camera software built in at the factory, but photographing effects with additional functions or special effects can also be obtained by downloading an application (APP) from the network. The user's actions are sometimes recognized before the effect is added.
In the prior art, image special effects that follow a target object generally rely on rather coarse positioning, for example following the outer contour or bounding box of the target object. The target object can therefore only be followed roughly, finer special-effect rendering cannot be performed, and the resulting special effects lack realism.
Disclosure of Invention
According to one aspect of the present disclosure, the following technical solutions are provided:
an image processing method based on target object pose, comprising: acquiring an original image from an image source, wherein the original image comprises a target object; segmenting the target object from the original image to generate a segmented target object; dividing the segmented target object into at least two target object regions; detecting a pose of the target object; in response to detecting that the target object is in the first posture, performing first image processing on the target object, wherein the first image processing is used for generating a first image at a preset position of the target object, and processing the at least two target object areas respectively by using different materials.
Further, after the performing the first image processing on the target object in response to detecting that the target object is in the first posture, the method further includes: in response to detecting that the target object is in a second pose, switching the first image processing to a second image processing and performing the second image processing on the target object, wherein the second image processing is to generate a second image at a predetermined position of the target object.
Further, after the switching the first image processing to the second image processing in response to detecting that the target object is in the second posture, the method further includes: in response to detecting again that the target object is in the second pose, switching the second image processing to the first image processing.
Further, segmenting the target object from the original image to generate a segmented target object, comprising: detecting a target object in the original image and generating an outer frame of the target object; and extracting the image in the external frame and extracting the target object from the image in the external frame.
Further, the dividing the segmentation target object into at least two target object regions includes: carrying out gray level processing on the image of the segmentation target object to obtain a gray level image of the target object; sorting pixel values in the gray scale map; and intercepting the pixel values according to a plurality of preset proportional ranges to form at least two target object areas.
Further, the intercepting the pixel values according to a plurality of preset scale ranges to form at least two target object regions includes: intercepting the pixel values according to a plurality of preset proportional ranges; performing Gaussian smoothing on pixel values in at least one proportion range; and forming at least two target object areas by taking the plurality of scale ranges as boundaries.
Further, the detecting the posture of the target object includes: inputting the segmented target object into a target object posture classifier; and determining the posture of the target object according to the output result of the target object posture classifier.
Further, the performing, in response to detecting that the target object is in the first posture, the first image processing on the target object includes: when the target object is detected to be in the first posture, determining a first position according to the key point of the target object; generating a first image at the first location, the first image being a sequence of frames comprising a plurality of image frames; acquiring at least two different materials corresponding to the at least two target object areas respectively, wherein the number of the materials is the same as that of the target object areas, and the materials are in one-to-one correspondence with the target object areas; processing the at least two target object regions using the at least two different materials, respectively.
Further, the switching the first image processing to a second image processing in response to detecting that the target object is in a second pose includes: when the target object is detected to be in the second posture, determining a second position according to the key point of the target object; generating a second image at the second location, the second image being a sequence of frames comprising a plurality of image frames.
Further, in response to detecting again that the target object is in the second pose, switching the second image processing to the first image processing includes: when the target object in the current frame of the original image is in the second posture and the target object in the previous frame of the current frame is in the non-second posture, judging that the target is detected to be in the second posture again; switching the second image processing to the first image processing.
According to another aspect of the present disclosure, the following technical solutions are also provided:
an image processing apparatus based on a target object pose, comprising:
the system comprises an original image acquisition module, a target object acquisition module and a target object acquisition module, wherein the original image acquisition module is used for acquiring an original image from an image source, and the original image comprises the target object; a target object segmentation module for segmenting the target object from the original image to generate a segmented target object; a region dividing module for dividing the segmentation target object into at least two target object regions; the gesture detection module is used for detecting the gesture of the target object; the first image processing module is used for responding to the detection that the target object is in the first posture, performing first image processing on the target object, wherein the first image processing is used for generating a first image on a preset position of the target object, and processing the at least two target object areas by using different materials respectively.
Further, the apparatus further includes: and the second image processing module is used for responding to the detection that the target object is in the second posture, switching the first image processing into second image processing and carrying out second image processing on the target object, wherein the second image processing is used for generating a second image on a preset position of the target object.
Further, the apparatus further comprises: and the switching module is used for responding to the second posture of the target object detected again and switching the second image processing into the first image processing.
Further, the target object segmentation module further includes: the external frame generating module is used for detecting a target object in the original image and generating an external frame of the target object; and the target object extraction module is used for extracting the image in the external frame and extracting the target object from the image in the external frame.
Further, the area dividing module further includes: the gray-scale image processing module is used for carrying out gray-scale processing on the image of the segmentation target object to obtain a gray-scale image of the target object; the sorting module is used for sorting the pixel values in the gray level image; and the target object area generating module is used for intercepting the pixel values according to a plurality of preset proportional ranges to form at least two target object areas.
Further, the target object region generating module is further configured to: intercepting the pixel values according to a plurality of preset proportional ranges; performing Gaussian smoothing on pixel values in at least one proportion range; and forming at least two target object areas by taking the plurality of scale ranges as boundaries.
Further, the gesture detection module further includes: an input module for inputting the segmented target object into a target object pose classifier; and the gesture determining module is used for determining the gesture of the target object according to the output result of the target object gesture classifier.
Further, the first image processing module further includes: the first position determining module is used for determining a first position according to the key point of the target object when the target object is detected to be in a first posture; a first image processing module for generating a first image at the first position, the first image being a frame sequence comprising a plurality of image frames; the material acquisition module is used for acquiring at least two different materials which respectively correspond to the at least two target object areas, wherein the number of the materials is the same as that of the target object areas, and the materials correspond to the target object areas one by one; and the target object area processing module is used for respectively processing the at least two target object areas by using the at least two different materials.
Further, the second image processing module further includes: the second position determining module is used for determining a second position according to the key point of the target object when the target object is detected to be in a second posture; a second image generation module for generating a second image at the second location, the second image being a frame sequence comprising a plurality of image frames.
Further, the switching module is further configured to: when the target object in the current frame of the original image is in the second posture and the target object in the previous frame of the current frame is not in the second posture, determine that the target object has been detected in the second posture again, and switch the second image processing to the first image processing.
According to still another aspect of the present disclosure, there is also provided the following technical solution:
an electronic device, comprising: a memory for storing non-transitory computer readable instructions; and a processor for executing the computer readable instructions, so that the processor realizes the steps of any one of the above image processing methods based on the target object posture when executing the computer readable instructions.
According to still another aspect of the present disclosure, there is also provided the following technical solution:
a computer readable storage medium storing non-transitory computer readable instructions which, when executed by a computer, cause the computer to perform the steps of any of the methods described above.
The present disclosure discloses an image processing method, an apparatus, and an electronic device based on the posture of a target object. The image processing method based on the posture of the target object comprises the following steps: acquiring an original image containing a target object from an image source; segmenting the target object from the original image to generate a segmented target object; dividing the segmented target object into at least two target object regions; detecting the posture of the target object; and, in response to detecting that the target object is in a first posture, performing first image processing on the target object, wherein the first image processing generates a first image at a predetermined position of the target object and processes the at least two target object regions with different materials respectively. By dividing the target object into at least two target object regions and performing image processing on those regions, the method solves the technical problem that special effects in the prior art lack detail and are insufficiently realistic.
The foregoing is merely a summary of the technical solutions of the present disclosure, provided to promote a clear understanding of its technical means. The present disclosure may be embodied in other specific forms without departing from its spirit or essential attributes.
Drawings
FIG. 1 is a schematic flowchart of an image processing method based on the posture of a target object according to an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of step S102 of the image processing method based on the posture of the target object according to an embodiment of the present disclosure;
FIG. 3 is a schematic flowchart of step S103 of the image processing method based on the posture of the target object according to an embodiment of the present disclosure;
FIG. 4 is a schematic flowchart of a further image processing method based on the posture of a target object according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of an image processing apparatus based on the posture of a target object according to an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the disclosure in the specification. It is to be understood that the described embodiments are merely illustrative of some, and not restrictive, of the embodiments of the disclosure. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details within the description without departing from the spirit of the disclosure. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only intended to illustrate the basic idea of the present disclosure. The drawings show only the components related to the disclosure rather than the number, shape, and size of the components in an actual implementation; the type, quantity, and proportions of the components in an actual implementation may vary arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
The embodiment of the disclosure provides an image processing method based on target object posture. The image processing method based on the target object posture provided by the embodiment can be executed by a computing device, the computing device can be implemented as software, or implemented as a combination of software and hardware, and the computing device can be integrally arranged in a server, a terminal device and the like. As shown in fig. 1, the image processing method based on the target object posture mainly includes the following steps S101 to S105. Wherein:
step S101: acquiring an original image from an image source, wherein the original image comprises a target object;
In the present disclosure, the image source may be a local storage space or a network storage space, and acquiring the original image from the image source accordingly means acquiring it from the local storage space or from the network storage space. In that case, the storage address of the original image is preferably obtained first, and the original image is then read from that address. The original image contains multiple frames of images and may be a video or a picture with a dynamic effect; any image consisting of multiple frames may serve as the original image in the present disclosure.
In the present disclosure, the image source may also be an image sensor, and acquiring the original image from the image source then means acquiring it from the image sensor. An image sensor refers to any device capable of capturing images; typical image sensors are video cameras, still cameras, and the like. In this embodiment, the image sensor may be a camera on a mobile terminal, such as the front-facing or rear-facing camera of a smartphone, and the image captured by the camera may be displayed directly on the phone's screen. In this step, the video captured by the image sensor is obtained so that the target object in the image can be identified in the following steps.
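As a minimal illustration (not part of the claimed method itself), frames could be read from such a camera with OpenCV; the device index and the BGR frame format below are assumptions about a typical setup.

```python
import cv2

def frames_from_sensor(device_index: int = 0):
    """Yield frames from a camera acting as the image source (illustrative sketch)."""
    capture = cv2.VideoCapture(device_index)   # assumed: default front/rear camera
    try:
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            yield frame                        # one BGR frame of the original image
    finally:
        capture.release()
```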
In the present disclosure, the original image includes a target object, which may be one or more specific objects, such as human hands, human faces, human bodies, various animals, and the like.
In a typical application, a user uses a camera of a mobile terminal to shoot a body image of the user, the mobile terminal displays the shot image on a display device of the mobile terminal, at the moment, the user can make various actions in front of the camera, and the mobile terminal detects the posture of a hand in the body image collected by the camera. The identification and detection of the target object will be described in the next several steps, and will not be described in detail here.
Step S102: segmenting the target object from the original image to generate a segmented target object;
as shown in fig. 2, in the present disclosure, the segmenting the target object from the original image to generate a segmented target object may include the steps of:
step S201: detecting a target object in the original image and generating an outer frame of the target object;
step S202: and extracting the image in the external frame and extracting the target object from the image in the external frame.
In step S201, a target object detection method may be used for each frame of the original image: image features are extracted to form a feature map, the feature map is divided into a plurality of cells, a predetermined number of candidate bounding boxes is assigned to each cell, the bounding box that contains the most features of the target object is determined, and that box is output as the outer frame of the target object. Typically, the target object is a human hand, and the outer frame of the hand is generated by detecting the hand. It is understood that other methods may also be used to detect the human hand and form the outer frame, which are not described here.
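The grid-and-candidate-box selection described above might be sketched as follows; the `boxes` and `scores` arrays (one objectness score per candidate box, pooled over all grid cells) are assumptions, since the disclosure does not fix a particular detector.

```python
import numpy as np

def select_outer_frame(boxes: np.ndarray, scores: np.ndarray) -> np.ndarray:
    """Pick the candidate bounding box most likely to contain the target object.

    boxes:  (N, 4) candidate boxes as (x, y, w, h), gathered from all grid cells.
    scores: (N,) score for how strongly each box responds to target-object features.
    """
    best = int(np.argmax(scores))   # box containing the most target-object features
    return boxes[best]              # output as the outer frame of the target object
```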
In step S202, the image in the outline frame is extracted, the target object is further identified separately, and the target object is segmented.
Taking a human hand as an example, when segmenting the hand, color features can be used to locate the hand and separate it from the background. In a conventional approach, the color information of the image and the positions of that color information are acquired with the image sensor; the color information is compared with preset hand color information; first color information whose difference from the preset hand color information is smaller than a first threshold is identified as belonging to the hand; and the contour of the hand is formed from the positions of the first color information. Preferably, to reduce the interference of ambient brightness with the color information, the RGB image data captured by the image sensor may be mapped into the HSV color space and the HSV values used for comparison; in particular, the hue channel may be used as the color information, since hue is least affected by brightness and therefore filters out brightness interference well. Alternatively, a deep learning method may be used with a pre-trained deep learning model. Because step S201 has already narrowed the image down to the region of the hand, the inference time of the deep learning model is greatly reduced. The model is trained to output, for each pixel of the input image, the probability that the pixel belongs to a human hand. Specifically, the model may be a convolutional neural network: the image inside the outer frame is abstracted into feature maps through multiple convolutional layers, each pixel of the feature maps is classified through a fully connected layer to decide whether it is a hand pixel, and the hand image segmented from the original image is finally obtained.
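A minimal sketch of the colour-based variant is given below, assuming the input is a BGR crop taken from inside the outer frame; the hue, saturation, and value bounds are illustrative stand-ins for the "preset hand color information" and would need tuning.

```python
import cv2
import numpy as np

def segment_hand_by_hue(crop_bgr: np.ndarray,
                        hue_range=(0, 25), sat_min=40, val_min=60) -> np.ndarray:
    """Return a binary mask of likely hand pixels inside the outer-frame crop.

    The crop is mapped to HSV so the comparison relies mainly on hue, which is
    least affected by brightness; the numeric bounds are only assumptions.
    """
    hsv = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([hue_range[0], sat_min, val_min], dtype=np.uint8)
    upper = np.array([hue_range[1], 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)   # pixels whose colour is close to skin
    return cv2.medianBlur(mask, 5)          # suppress small speckles on the mask
```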
In the present disclosure, the segmenting the target object may further include detecting a keypoint on the target object while or after segmenting the target object. The detection of the key points can generally use a deep learning model, which is trained by a training set marked with key points in advance, so that after the segmented target object is input into the deep learning model, the deep learning model can regress the positions of the key points on the segmented target object. The location of the keypoints can be used for localization in a later step.
It is understood that there are many methods for segmenting the target object, and further optimization can be performed on different target objects, which is not within the scope of the present disclosure and is not described in detail, and any method that can segment the target object from the original image can be applied to the present disclosure.
Step S103: dividing the segmented target object into at least two target object regions;
in the present disclosure, in order to make the image special effects appear more realistic, the target object is divided into a plurality of regions, and an additional special effect or a related special effect of the image special effects is added in each region, and before adding the special effects, the target object needs to be divided into regions. As shown in fig. 3, in the present disclosure, the dividing the segmentation target object into at least two target object regions may include:
s301: carrying out gray level processing on the image of the segmentation target object to obtain a gray level image of the target object;
s302: sorting pixel values in the gray scale map;
s303: and intercepting the pixel values according to a plurality of preset proportional ranges to form at least two target object areas.
In the above steps, gray-scale processing is first performed on the image of the segmented target object to obtain a grayscale map of the target object. Many gray-scale conversion methods exist. Typically, the pixel values of the three RGB channels are averaged, either as a plain average or as a weighted average, and the result is used as the pixel value in the grayscale map; alternatively, the maximum of the three channel values may be used as the grayscale pixel value. Other gray-scale conversion methods are not described in detail; any of them may be applied in the technical solution of the present disclosure.
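The conversion options mentioned above (plain average, weighted average, channel maximum) might look like the sketch below; the weighted coefficients shown are the common luminance weights and are only an example, not values fixed by the disclosure.

```python
import numpy as np

def to_grayscale(rgb: np.ndarray, mode: str = "weighted") -> np.ndarray:
    """Convert an (H, W, 3) RGB image of the segmented target object to a grayscale map."""
    rgb = rgb.astype(np.float32)
    if mode == "mean":        # plain average of the three channels
        gray = rgb.mean(axis=2)
    elif mode == "weighted":  # weighted average (illustrative luminance weights)
        gray = rgb @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    elif mode == "max":       # maximum of the three channel values
        gray = rgb.max(axis=2)
    else:
        raise ValueError(f"unknown mode: {mode}")
    return np.clip(gray, 0, 255).astype(np.uint8)
```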
After the grayscale map is obtained, the pixel values in it are sorted, typically from small to large. Taking a 3 x 3 grayscale image with pixels a1 to a9 as an example, sorting from small to large yields the pixel-value sequence 50, 60, 70, 120, 150, 170, 200, 210, 220, with the corresponding pixel ordering a1, a2, a3, a6, a5, a4, a7, a8, a9.

After the pixel values in the grayscale map have been sorted, they are divided according to a plurality of preset proportion ranges to form at least two target object regions. Specifically, the proportion ranges may be set to 0 to 0.33, 0.33 to 0.66, and 0.66 to 1, i.e. the sorted pixel values are split into thirds. For the 3 x 3 grayscale image above, the pixel values are divided into three groups, (50, 60, 70), (120, 150, 170), and (200, 210, 220); the corresponding pixels are divided into (a1, a2, a3), (a6, a5, a4), and (a7, a8, a9); and the target object is accordingly divided into three target object regions. For a human hand, because of illumination the hand can be divided into three parts: a shadowed part with relatively low pixel values, a half-shadowed part with intermediate pixel values, and a highlighted part with the highest pixel values; in that case the preferred proportion ranges are 0 to 0.1, 0.1 to 0.7, and 0.7 to 1. The proportions may be set arbitrarily, or according to the attributes of the target object, and are not limited here.
In one embodiment, step S303 may further include: dividing the pixel values according to the plurality of preset proportion ranges; performing Gaussian smoothing on the pixel values within at least one proportion range; and forming at least two target object regions bounded by the proportion ranges. After the pixel values are divided by proportion, the region boundaries may contain considerable noise that degrades the effect, so Gaussian smoothing may be applied to the pixel values within at least one proportion range to reduce the noise, and the smoothed regions are used as the target object regions. The specific Gaussian smoothing method and its parameters are not detailed here and may be chosen freely according to the actual situation.
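A sketch of steps S301 to S303 follows, assuming the segmented object is given as a grayscale map plus a foreground mask and using the hand example's proportion ranges; `cv2.GaussianBlur` stands in for the otherwise unspecified smoothing.

```python
import cv2
import numpy as np

def split_into_regions(gray: np.ndarray, mask: np.ndarray,
                       ranges=((0.0, 0.1), (0.1, 0.7), (0.7, 1.0)),
                       smooth: bool = True):
    """Divide the segmented target object into regions by sorted pixel-value proportion.

    gray: (H, W) grayscale map of the segmented object.
    mask: (H, W) boolean foreground mask of the segmented object.
    Returns one boolean region mask per proportion range.
    """
    values = np.sort(gray[mask])                 # sorted foreground pixel values
    n = len(values)
    regions = []
    for lo, hi in ranges:
        lo_val = values[int(lo * (n - 1))]
        hi_val = values[int(hi * (n - 1))]
        region = mask & (gray >= lo_val) & (gray <= hi_val)
        if smooth:                               # soften noisy region boundaries
            blurred = cv2.GaussianBlur(region.astype(np.float32), (5, 5), 0)
            region = mask & (blurred > 0.5)
        regions.append(region)
    return regions
```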
Step S104: detecting a pose of the target object;
in the present disclosure, the posture of the target object may refer to various attributes of the target object, such as color, shape, motion track, and the like, and a specific posture of the target object to be detected may be preset as needed, and the posture is used as a trigger condition of image processing in the present disclosure.
It is understood that this step S104 may be performed in parallel with step S103.
In the present disclosure, the detecting the gesture of the target object may include: inputting the segmented target object into a target object posture classifier; and determining the posture of the target object according to the output result of the target object posture classifier.
Take a human hand as the target object. In this step, the hand segmented in step S102 may be input into a classification model to recognize its gesture. In step S104, the hand segmented from the current frame of the original image is input into an image classifier, and the gesture category is determined from the classification result. Typically, the network can only recognize predetermined gestures: the hand image is fed into the input layer of a convolutional neural network, a feature map produced by several convolutional layers is passed to the image classifier, the classifier outputs the probability that the hand image shows a given predetermined gesture, and when that probability exceeds a threshold the hand is recognized as being in that gesture. It is understood that other methods may also be used to recognize the hand gesture; any method that meets the real-time requirement may be applied in step S104 of the present disclosure.
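The classifier-plus-threshold decision can be sketched as below; the toy network shape, the 64 x 64 input size, and the 0.8 threshold are assumptions rather than parts of the disclosure, and a real classifier would first be trained on labelled gesture crops.

```python
import torch
import torch.nn as nn

class GestureClassifier(nn.Module):
    """Toy CNN producing per-gesture probabilities for a 3x64x64 hand crop (illustrative)."""
    def __init__(self, num_gestures: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(16 * 16 * 16, num_gestures)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.head(self.features(x).flatten(1)), dim=1)

def detect_pose(hand_crop: torch.Tensor, model: GestureClassifier,
                threshold: float = 0.8) -> int:
    """Return the index of the recognised gesture, or -1 if no probability clears the threshold."""
    with torch.no_grad():
        probs = model(hand_crop.unsqueeze(0))[0]   # shape: (num_gestures,)
    best = int(torch.argmax(probs))
    return best if float(probs[best]) > threshold else -1
```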
Step S105: in response to detecting that the target object is in the first posture, performing first image processing on the target object, wherein the first image processing is used for generating a first image at a preset position of the target object, and processing the at least two target object areas respectively by using different materials.
Wherein the performing of the first image processing on the target object in response to detecting that the target object is in the first pose may include: when the target object is detected to be in the first posture, determining a first position according to the key point of the target object; generating a first image at the first location, the first image being a sequence of frames comprising a plurality of image frames; acquiring at least two different materials corresponding to the at least two target object areas respectively, wherein the number of the materials is the same as that of the target object areas, and the materials are in one-to-one correspondence with the target object areas; processing the at least two target object regions using the at least two different materials, respectively.
Specifically, take the target object to be a human hand. When an open-palm gesture is detected, a first position is determined from preselected keypoints of the hand; this position is the rendering position of the first image. Typically, a keypoint at the center of the palm may be preset as the first position, and an image of a red flame is then generated there. The flame image is a frame sequence containing multiple image frames so as to show a burning effect, producing the effect of a red flame burning continuously in the palm. Meanwhile, in step S103 the hand has been divided into three regions, such as the part above the palm, the part below the palm, and the part in between. Different processing materials are preset for different brightness ranges: for example, a bright yellow color card is used to process the pixels of the region with the highest brightness, an orange-yellow color card for the region of intermediate brightness, and a dark yellow color card for the region of lowest brightness. This creates the effect that the closer a point is to the red flame, the brighter it appears, so the flame effect approximates the light and shadow produced by an actual burning flame and looks more vivid. It can be understood that, in one embodiment of the present disclosure, since the grayscale pixel values already express brightness to a certain extent when the regions are divided, a color card of the same color may instead be used on all three regions. In that case the RGB image of the hand is converted into HSL space, the value of the L (lightness) component is kept unchanged, and the H and S components are assigned the H and S components of the color card; the three regions of the hand are thereby rendered in the color-card color at different brightness levels while keeping the color of the card, which also produces the light-and-shadow effect described above. Any manner may be used for generating the first image and for processing the target object regions; in general, the processing of the target object regions is associated with the first image, such as the shadow effect produced by illumination or the ripple effect produced by moving water, and by configuring different first images together with different regional effects corresponding to them, effects closer to a real scene, or other special effects, can be achieved, which are not described further here. The first position may be the position of a keypoint used directly, or a new position computed from the positions of several keypoints, which is not specifically limited here.
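A minimal sketch of the "same colour card, keep the lightness" variant described above follows, assuming an RGB palm image and one boolean mask per region; the colour-card value is a placeholder. OpenCV's HLS conversion is used here as the HSL colour space (same components, different channel order).

```python
import cv2
import numpy as np

def recolor_region_keep_lightness(rgb: np.ndarray, region_mask: np.ndarray,
                                  card_rgb=(255, 200, 40)) -> np.ndarray:
    """Recolour one target-object region with a colour card while keeping its lightness.

    The image is converted to HLS, the L channel of the region is left untouched,
    and H and S are replaced by the colour card's values, so brighter pixels stay
    brighter and the light-and-shadow structure of the region is preserved.
    """
    hls = cv2.cvtColor(rgb, cv2.COLOR_RGB2HLS)
    card = np.uint8([[list(card_rgb)]])                       # 1x1 "image" of the card colour
    card_h, _, card_s = cv2.cvtColor(card, cv2.COLOR_RGB2HLS)[0, 0]
    hls[region_mask, 0] = card_h                              # hue from the colour card
    hls[region_mask, 2] = card_s                              # saturation from the colour card
    return cv2.cvtColor(hls, cv2.COLOR_HLS2RGB)
```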
As shown in fig. 4, after step S105, the method may further include the steps of:
step S401: in response to detecting that the target object is in a second pose, switching the first image processing to a second image processing and performing the second image processing on the target object, wherein the second image processing is to generate a second image at a predetermined position of the target object. Wherein the switching the first image processing to a second image processing and the second image processing of the target object in response to detecting that the target object is in a second pose comprises: when the target object is detected to be in the second posture, determining a second position according to the key point of the target object; generating a second image at the second location, the second image being a sequence of frames comprising a plurality of image frames.
Specifically, when the target object is a human hand and a fist gesture is detected, the first image processing is switched to the second image processing. In a specific embodiment, the second image processing may generate a frame sequence of blue flames; in this case the hand is no longer divided into regions, that is, in the second image processing the target object regions are not processed and the target object remains in its initial state. The manner of generating the second image at the predetermined position of the target object may be the same as or different from the manner of generating the first image, which is not limited here. It is understood that the second image processing may also include processing of the target object regions, but in a manner different from that of the first image processing.
It can be understood that, in this step, the second posture serves as a switching trigger: whenever the target object is detected in the second posture, the current image processing mode is switched to the next one. When this mode is used and only the first and the second image processing are available, after the first image processing has been switched to the second image processing in response to detecting that the target object is in the second posture, the method further includes: in response to detecting again that the target object is in the second posture, switching the second image processing back to the first image processing. That is, the first posture triggers the initial image effect: when the target object is detected in the first posture, the default first image processing is triggered. Thereafter the second posture acts as the switching trigger for the image processing mode; each time it is detected, a different image processing mode is switched to in a cycle, and the first posture no longer has a triggering effect.
Switching the second image processing to the first image processing in response to detecting the target object in the second posture again comprises: when the target object in the current frame of the original image is in the second posture and the target object in the previous frame is not in the second posture, determining that the target object has been detected in the second posture again, and switching the second image processing to the first image processing. Specifically, after a fist gesture has already been detected, if the current frame shows a fist and the previous frame does not, the fist gesture is judged to have been detected again. If two consecutive frames both show the fist, the image processing mode is not switched, since this only indicates that the fist is being held.
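The per-frame trigger logic described above (the first posture starts the default effect; each fresh occurrence of the second posture toggles between the two processings, while a held second posture does nothing) can be captured in a small state machine; the posture labels below are assumptions for illustration.

```python
from typing import Optional

FIRST_POSE, SECOND_POSE, NO_POSE = "open_palm", "fist", "none"   # illustrative labels

class EffectSwitcher:
    """Toggle between first and second image processing on each newly detected second posture."""
    def __init__(self):
        self.mode: Optional[str] = None        # None, "first" or "second"
        self.prev_pose = NO_POSE

    def update(self, pose: str) -> Optional[str]:
        if self.mode is None:
            if pose == FIRST_POSE:
                self.mode = "first"            # first posture triggers the default effect
        elif pose == SECOND_POSE and self.prev_pose != SECOND_POSE:
            # second posture detected anew (previous frame was not the second posture)
            self.mode = "second" if self.mode == "first" else "first"
        self.prev_pose = pose
        return self.mode                       # which image processing to apply this frame
```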
Alternatively, the first posture and the second posture may each correspond to a fixed image processing mode: for example, the first posture corresponds to the first image processing and the second posture to the second image processing, and whenever the target object is detected in the second posture, the second image processing is switched to regardless of the current image processing mode.
The present disclosure discloses an image processing method, an apparatus, and an electronic device based on the posture of a target object. The image processing method based on the posture of the target object comprises the following steps: acquiring an original image containing a target object from an image source; segmenting the target object from the original image to generate a segmented target object; dividing the segmented target object into at least two target object regions; detecting the posture of the target object; and, in response to detecting that the target object is in a first posture, performing first image processing on the target object, wherein the first image processing generates a first image at a predetermined position of the target object and processes the at least two target object regions with different materials respectively. The method divides the target object into at least two target object regions and performs image processing on those regions, thereby solving the technical problem that special effects in the prior art lack detail and are insufficiently realistic.
Although the steps in the above method embodiments are described in the order given, it should be clear to those skilled in the art that the steps of the embodiments of the present disclosure are not necessarily performed in that order; they may also be performed in other orders, such as reversed, in parallel, or interleaved. Moreover, on the basis of the above steps, those skilled in the art may add further steps, and such obvious variations or equivalent alternatives also fall within the protection scope of the present disclosure and are not described further here.
For convenience of description, only the parts relevant to the embodiments of the present disclosure are shown; for specific technical details that are not disclosed here, please refer to the method embodiments of the present disclosure.
The embodiment of the disclosure provides an image processing device. The apparatus may perform the steps described in the above embodiments of the target object pose-based image processing method. As shown in fig. 5, the apparatus 500 mainly includes: an original image acquisition module 501, a target object segmentation module 502, a region division module 503, a posture detection module 504 and a first image processing module 505. Wherein,
an original image obtaining module 501, configured to obtain an original image from an image source, where the original image includes a target object;
a target object segmentation module 502 for segmenting the target object from the original image to generate a segmented target object;
a region dividing module 503, configured to divide the segmentation target object into at least two target object regions;
a gesture detection module 504 for detecting a gesture of the target object;
the first image processing module 505 is configured to, in response to detecting that the target object is in the first posture, perform first image processing on the target object, where the first image processing is to generate a first image at a predetermined position of the target object, and process the at least two target object regions using different materials, respectively.
Further, the apparatus 500 further includes:
and the second image processing module is used for responding to the detection that the target object is in the second posture, switching the first image processing into second image processing and carrying out second image processing on the target object, wherein the second image processing is used for generating a second image on a preset position of the target object.
Further, the apparatus 500 further includes:
and the switching module is used for responding to the second posture of the target object detected again and switching the second image processing into the first image processing.
Further, the target object segmentation module 502 further includes:
the external frame generating module is used for detecting a target object in the original image and generating an external frame of the target object;
and the target object extraction module is used for extracting the image in the external frame and extracting the target object from the image in the external frame.
Further, the region dividing module 503 further includes:
the gray-scale image processing module is used for carrying out gray-scale processing on the image of the segmentation target object to obtain a gray-scale image of the target object;
the sorting module is used for sorting the pixel values in the gray level image;
and the target object area generating module is used for intercepting the pixel values according to a plurality of preset proportional ranges to form at least two target object areas.
Further, the target object region generating module is further configured to:
intercepting the pixel values according to a plurality of preset proportional ranges; performing Gaussian smoothing on pixel values in at least one proportion range; and forming at least two target object areas by taking the plurality of scale ranges as boundaries.
Further, the gesture detection module 504 further includes:
an input module for inputting the segmented target object into a target object pose classifier;
and the gesture determining module is used for determining the gesture of the target object according to the output result of the target object gesture classifier.
Further, the first image processing module 505 further includes:
the first position determining module is used for determining a first position according to the key point of the target object when the target object is detected to be in a first posture;
a first image processing module for generating a first image at the first position, the first image being a frame sequence comprising a plurality of image frames;
the material acquisition module is used for acquiring at least two different materials which respectively correspond to the at least two target object areas, wherein the number of the materials is the same as that of the target object areas, and the materials correspond to the target object areas one by one;
and the target object area processing module is used for respectively processing the at least two target object areas by using the at least two different materials.
Further, the second image processing module further includes:
the second position determining module is used for determining a second position according to the key point of the target object when the target object is detected to be in a second posture;
a second image generation module for generating a second image at the second location, the second image being a frame sequence comprising a plurality of image frames.
Further, the switching module is further configured to:
when the target object in the current frame of the original image is in the second posture and the target object in the previous frame of the current frame is in the non-second posture, judging that the target is detected to be in the second posture again; switching the second image processing to the first image processing.
The apparatus shown in fig. 5 can perform the method of the embodiment shown in fig. 1-4, and the detailed description of this embodiment can refer to the related description of the embodiment shown in fig. 1-4. The implementation process and technical effect of the technical solution are described in the embodiments shown in fig. 1 to 4, and are not described herein again.
Referring now to FIG. 6, a block diagram of an electronic device 600 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be included in the electronic device described above, or it may exist separately without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquire an original image from an image source, wherein the original image comprises a target object; segment the target object from the original image to generate a segmented target object; divide the segmented target object into at least two target object regions; detect a pose of the target object; and, in response to detecting that the target object is in a first pose, perform first image processing on the target object, wherein the first image processing is used for generating a first image at a predetermined position of the target object and for processing the at least two target object regions respectively using different materials.
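By way of illustration only, and not as part of the disclosed embodiments, a minimal Python sketch of such a program is given below. The segmentation, region division, and pose detection steps are reduced to simple stand-ins (Otsu thresholding, a gray-level split, and a caller-supplied `classify_pose` function), and the two flat colors used as "materials" are placeholders.

```python
import cv2
import numpy as np

def process_frame(frame: np.ndarray, classify_pose) -> np.ndarray:
    """Illustrative per-frame pipeline; classify_pose(image) -> pose label is a stand-in classifier."""
    # Segment the target object (crude foreground mask via Otsu thresholding).
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    target = cv2.bitwise_and(frame, frame, mask=mask)

    # Divide the segmented target object into two regions by gray level.
    target_gray = cv2.cvtColor(target, cv2.COLOR_BGR2GRAY)
    dark_region = (target_gray > 0) & (target_gray <= 127)
    bright_region = target_gray > 127

    # Detect the pose; apply the first image processing only for the first pose.
    if classify_pose(target) == "first_pose":
        out = frame.copy()
        out[dark_region] = (0.5 * out[dark_region] + 0.5 * np.array([255, 0, 0])).astype(np.uint8)      # material A
        out[bright_region] = (0.5 * out[bright_region] + 0.5 * np.array([0, 0, 255])).astype(np.uint8)  # material B
        # A "first image" (e.g. one frame of a sticker sequence) would be composited at a
        # predetermined position here; see the sketches accompanying claims 8 and 10 below.
        return out
    return frame
```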
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages or any combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
The foregoing description is merely a description of preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the particular combination of the features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with features having similar functions disclosed in (but not limited to) the present disclosure.
Claims (13)
1. An image processing method based on target object pose, comprising:
acquiring an original image from an image source, wherein the original image comprises a target object;
segmenting the target object from the original image to generate a segmented target object;
dividing the segmented target object into at least two target object regions;
detecting a pose of the target object;
in response to detecting that the target object is in a first pose, performing first image processing on the target object, wherein the first image processing is used for generating a first image at a predetermined position of the target object and for processing the at least two target object regions respectively using different materials.
2. The target object pose-based image processing method of claim 1, further comprising, after said first image processing of the target object in response to detecting that the target object is in a first pose:
in response to detecting that the target object is in a second pose, switching the first image processing to a second image processing and performing the second image processing on the target object, wherein the second image processing is to generate a second image at a predetermined position of the target object.
3. The target object pose-based image processing method of claim 2, further comprising, after said switching the first image processing to second image processing in response to detecting that the target object is in a second pose:
in response to detecting again that the target object is in the second pose, switching the second image processing to the first image processing.
4. The target object pose-based image processing method of claim 1, wherein segmenting the target object from the original image to generate a segmented target object, comprises:
detecting the target object in the original image and generating an outer frame of the target object;
and extracting the image within the outer frame and extracting the target object from the image within the outer frame.
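Purely as an illustrative sketch of this extraction step, and not as the disclosed implementation, the outer-frame detection and target extraction could be wired up as follows; `detect_target` is a hypothetical detector returning a bounding box, and GrabCut is used here only as a convenient stand-in for the segmentation model.

```python
import cv2
import numpy as np

def extract_target(original: np.ndarray, detect_target) -> np.ndarray:
    """detect_target(image) -> (x, y, w, h): hypothetical detector returning the outer frame."""
    x, y, w, h = detect_target(original)
    crop = original[y:y + h, x:x + w]                      # image inside the outer frame

    # Extract the target object from the cropped image, e.g. with GrabCut initialised by the box.
    mask = np.zeros(crop.shape[:2], np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    rect = (1, 1, crop.shape[1] - 2, crop.shape[0] - 2)
    cv2.grabCut(crop, mask, rect, bgd, fgd, 3, cv2.GC_INIT_WITH_RECT)
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
    return cv2.bitwise_and(crop, crop, mask=fg)
```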
5. The target object pose-based image processing method of claim 1, wherein said dividing said segmented target object into at least two target object regions comprises:
performing grayscale processing on the image of the segmented target object to obtain a grayscale image of the target object;
sorting the pixel values in the grayscale image;
and truncating the sorted pixel values according to a plurality of preset proportion ranges to form the at least two target object regions.
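A minimal sketch of this region division, assuming the preset proportion ranges split the sorted foreground gray values into cumulative portions (for example, the darker 50% and the brighter 50%); the function and its defaults are illustrative only.

```python
import cv2
import numpy as np

def divide_by_gray_proportion(segmented: np.ndarray, proportions=(0.5, 1.0)):
    """Split the segmented target into regions using sorted gray-value proportion ranges."""
    gray = cv2.cvtColor(segmented, cv2.COLOR_BGR2GRAY).astype(np.int32)
    fg = gray > 0                                    # pixels belonging to the segmented target
    sorted_vals = np.sort(gray[fg])                  # sorted foreground pixel values

    regions, lower = [], -1
    for p in proportions:                            # cumulative proportions, e.g. 50% then 100%
        cutoff = int(sorted_vals[int(p * (sorted_vals.size - 1))])
        regions.append(fg & (gray > lower) & (gray <= cutoff))
        lower = cutoff
    return regions
```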
6. The target object pose-based image processing method of claim 5, wherein said truncating the pixel values according to a plurality of preset proportion ranges to form at least two target object regions comprises:
truncating the pixel values according to the plurality of preset proportion ranges;
performing Gaussian smoothing on the pixel values in at least one proportion range;
and forming the at least two target object regions by taking the plurality of proportion ranges as boundaries.
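One way to read the smoothing step of claim 6 is to apply Gaussian smoothing only to the gray values falling in a given proportion range before the region boundaries are fixed; a short sketch under that assumption (the kernel size is a placeholder):

```python
import cv2
import numpy as np

def smooth_range(gray: np.ndarray, range_mask: np.ndarray, ksize: int = 5) -> np.ndarray:
    """Gaussian-smooth only the pixel values inside one proportion range."""
    blurred = cv2.GaussianBlur(gray, (ksize, ksize), 0)
    out = gray.copy()
    out[range_mask] = blurred[range_mask]            # smoothed values replace originals in this range
    return out
```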
7. The target object pose-based image processing method of claim 1, wherein said detecting a pose of the target object comprises:
inputting the segmented target object into a target object pose classifier;
and determining the pose of the target object according to an output result of the target object pose classifier.
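The classifier step could be realised, for instance, with a small image classification network; the PyTorch wiring below is a sketch only, and the model architecture and label set are placeholders rather than the disclosed classifier.

```python
import torch
import torch.nn.functional as F

POSE_LABELS = ["first_pose", "second_pose", "other"]   # placeholder label set

def classify_pose(model: torch.nn.Module, segmented: torch.Tensor) -> str:
    """Run a pose classifier on the segmented target (tensor shaped [1, 3, H, W], values in [0, 1])."""
    model.eval()
    with torch.no_grad():
        logits = model(segmented)                      # [1, num_labels]
        probs = F.softmax(logits, dim=1)
    return POSE_LABELS[int(probs.argmax(dim=1).item())]
```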
8. The target object pose-based image processing method of claim 1, wherein said first image processing of the target object in response to detecting that the target object is in a first pose comprises:
when it is detected that the target object is in the first pose, determining a first position according to a key point of the target object;
generating the first image at the first position, the first image being a frame sequence comprising a plurality of image frames;
acquiring at least two different materials corresponding respectively to the at least two target object regions, wherein the number of materials is the same as the number of target object regions and the materials are in one-to-one correspondence with the target object regions;
processing the at least two target object regions using the at least two different materials, respectively.
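An illustrative reading of claim 8, in which the first position is derived from one key point, the first image is one frame of an animation sequence composited at that position, and each region is blended with its own material; all names, the blending weight, and the flat-color materials are placeholders.

```python
import cv2
import numpy as np

def apply_first_effect(frame, keypoint, effect_frames, frame_idx, regions, materials, alpha=0.6):
    """Composite one frame of the effect sequence at the key point and blend one material per region."""
    out = frame.copy()

    # Generate the first image at the position determined by the key point (frame sequence, cycled).
    effect = effect_frames[frame_idx % len(effect_frames)]          # 3-channel image, same dtype as frame
    h, w = effect.shape[:2]
    x = max(0, int(keypoint[0]) - w // 2)                           # centre the effect on the key point
    y = max(0, int(keypoint[1]) - h // 2)
    out[y:y + h, x:x + w] = effect[:out.shape[0] - y, :out.shape[1] - x]

    # Materials and regions are in one-to-one correspondence.
    for region_mask, material in zip(regions, materials):
        out[region_mask] = (alpha * out[region_mask] + (1 - alpha) * np.asarray(material)).astype(np.uint8)
    return out
```

Here `regions` would come from the division of claim 5, and `materials` could be, for example, a list of BGR colors or sampled textures, one per region.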
9. The target object pose based image processing method of claim 2, wherein said switching the first image processing to the second image processing in response to detecting that the target object is in the second pose comprises:
when it is detected that the target object is in the second pose, determining a second position according to a key point of the target object;
generating the second image at the second position, the second image being a frame sequence comprising a plurality of image frames.
10. The target object pose-based image processing method of claim 3, wherein switching the second image processing to the first image processing in response to again detecting that the target object is in the second pose comprises:
when the target object in the current frame of the original image is in the second pose and the target object in the frame preceding the current frame is not in the second pose, determining that the target object is detected to be in the second pose again;
switching the second image processing to the first image processing.
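The re-detection condition of claim 10 amounts to a transition check between consecutive frames; together with claims 2 and 3, the switching behaviour can be sketched as a small toggle (pose labels and effect names are placeholders):

```python
def make_effect_switcher():
    """Toggle between the first and second image processing each time the second pose is (re)entered."""
    state = {"prev_pose": None, "effect": "first"}

    def on_frame(pose: str) -> str:
        # "Detected again" means a non-second -> second transition between consecutive frames.
        if pose == "second_pose" and state["prev_pose"] != "second_pose":
            state["effect"] = "second" if state["effect"] == "first" else "first"
        state["prev_pose"] = pose
        return state["effect"]

    return on_frame
```

For example, a per-frame pose stream of first, second, second, first, second would yield the effects first, second, second, second, first: the effect switches on the first detection of the second pose and switches back when the second pose is detected again.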
11. An image processing apparatus based on a target object pose, comprising:
an original image acquisition module for acquiring an original image from an image source, wherein the original image comprises a target object;
a target object segmentation module for segmenting the target object from the original image to generate a segmented target object;
a region dividing module for dividing the segmented target object into at least two target object regions;
a pose detection module for detecting a pose of the target object;
and a first image processing module for, in response to detecting that the target object is in a first pose, performing first image processing on the target object, wherein the first image processing is used for generating a first image at a predetermined position of the target object and for processing the at least two target object regions respectively using different materials.
12. An electronic device, comprising:
a memory for storing computer readable instructions; and
a processor for executing the computer readable instructions, such that the processor, when executing the instructions, implements the target object pose-based image processing method according to any one of claims 1-10.
13. A non-transitory computer readable storage medium storing computer readable instructions which, when executed by a computer, cause the computer to perform the target object pose-based image processing method of any one of claims 1-10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910357692.8A CN110084204B (en) | 2019-04-29 | 2019-04-29 | Image processing method and device based on target object posture and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110084204A | 2019-08-02
CN110084204B CN110084204B (en) | 2020-11-24 |
Family
ID=67417739
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910357692.8A Active CN110084204B (en) | 2019-04-29 | 2019-04-29 | Image processing method and device based on target object posture and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110084204B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110401868A (en) * | 2019-08-23 | 2019-11-01 | 北京达佳互联信息技术有限公司 | A kind of material methods of exhibiting and device |
CN110825286A (en) * | 2019-10-30 | 2020-02-21 | 北京字节跳动网络技术有限公司 | Image processing method and device and electronic equipment |
CN111242881A (en) * | 2020-01-07 | 2020-06-05 | 北京字节跳动网络技术有限公司 | Method, device, storage medium and electronic equipment for displaying special effects |
CN112818842A (en) * | 2021-01-29 | 2021-05-18 | 徐文海 | Intelligent image recognition swimming timing system and timing method based on machine learning |
CN114051632A (en) * | 2021-06-22 | 2022-02-15 | 商汤国际私人有限公司 | Human body and human hand association method, device, equipment and storage medium |
CN114372931A (en) * | 2021-12-31 | 2022-04-19 | 北京旷视科技有限公司 | Target object blurring method and device, storage medium and electronic equipment |
WO2024067396A1 (en) * | 2022-09-28 | 2024-04-04 | 北京字跳网络技术有限公司 | Image processing method and apparatus, and device and medium |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11120338A (en) * | 1997-10-09 | 1999-04-30 | Tech Res & Dev Inst Of Japan Def Agency | Image processing evaluation system |
CN101923637A (en) * | 2010-07-21 | 2010-12-22 | 康佳集团股份有限公司 | Mobile terminal as well as human face detection method and device thereof |
CN105264548A (en) * | 2013-03-08 | 2016-01-20 | 微软技术许可有限责任公司 | Inconspicuous tag for generating augmented reality experiences |
US20160080658A1 (en) * | 2014-09-12 | 2016-03-17 | Canon Kabushiki Kaisha | Position control device and position control method, optical device, and image pickup apparatus |
CN106251404A (en) * | 2016-07-19 | 2016-12-21 | 央数文化(上海)股份有限公司 | Orientation tracking, the method realizing augmented reality and relevant apparatus, equipment |
CN106406504A (en) * | 2015-07-27 | 2017-02-15 | 常州市武进区半导体照明应用技术研究院 | Atmosphere rendering system and method of man-machine interaction interface |
CN107911643A (en) * | 2017-11-30 | 2018-04-13 | 维沃移动通信有限公司 | Show the method and apparatus of scene special effect in a kind of video communication |
CN108111911A (en) * | 2017-12-25 | 2018-06-01 | 北京奇虎科技有限公司 | Video data real-time processing method and device based on the segmentation of adaptive tracing frame |
KR20180092674A (en) * | 2017-02-10 | 2018-08-20 | 엘아이지넥스원 주식회사 | Apparatus and method for measuring target signal in SWIR band |
CN108537867A (en) * | 2018-04-12 | 2018-09-14 | 北京微播视界科技有限公司 | According to the Video Rendering method and apparatus of user's limb motion |
CN108776822A (en) * | 2018-06-22 | 2018-11-09 | 腾讯科技(深圳)有限公司 | Target area detection method, device, terminal and storage medium |
CN109085931A (en) * | 2018-07-25 | 2018-12-25 | 南京禹步信息科技有限公司 | A kind of interactive input method, device and storage medium that actual situation combines |
CN109191548A (en) * | 2018-08-28 | 2019-01-11 | 百度在线网络技术(北京)有限公司 | Animation method, device, equipment and storage medium |
CN109360222A (en) * | 2018-10-25 | 2019-02-19 | 北京达佳互联信息技术有限公司 | Image partition method, device and storage medium |
CN109462776A (en) * | 2018-11-29 | 2019-03-12 | 北京字节跳动网络技术有限公司 | A kind of special video effect adding method, device, terminal device and storage medium |
CN109544444A (en) * | 2018-11-30 | 2019-03-29 | 深圳市脸萌科技有限公司 | Image processing method, device, electronic equipment and computer storage medium |
CN109618183A (en) * | 2018-11-29 | 2019-04-12 | 北京字节跳动网络技术有限公司 | A kind of special video effect adding method, device, terminal device and storage medium |
CN109657537A (en) * | 2018-11-05 | 2019-04-19 | 北京达佳互联信息技术有限公司 | Image-recognizing method, system and electronic equipment based on target detection |
Also Published As
Publication number | Publication date |
---|---|
CN110084204B (en) | 2020-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110084204B (en) | Image processing method and device based on target object posture and electronic equipment | |
CN110070551B (en) | Video image rendering method and device and electronic equipment | |
CN108594997B (en) | Gesture skeleton construction method, device, equipment and storage medium | |
CN108229277B (en) | Gesture recognition method, gesture control method, multilayer neural network training method, device and electronic equipment | |
CN112162930B (en) | Control identification method, related device, equipment and storage medium | |
CN108053381B (en) | Dynamic tone mapping method, mobile terminal and computer-readable storage medium | |
CN110070063B (en) | Target object motion recognition method and device and electronic equipment | |
CN112241714B (en) | Method and device for identifying designated area in image, readable medium and electronic equipment | |
CN112950525B (en) | Image detection method and device and electronic equipment | |
CN111771226A (en) | Electronic device, image processing method thereof, and computer-readable recording medium | |
CN109685746A (en) | Brightness of image method of adjustment, device, storage medium and terminal | |
CN110059685A (en) | Word area detection method, apparatus and storage medium | |
CN110069974B (en) | Highlight image processing method and device and electronic equipment | |
CN109145970B (en) | Image-based question and answer processing method and device, electronic equipment and storage medium | |
CN111950570B (en) | Target image extraction method, neural network training method and device | |
CN113205515B (en) | Target detection method, device and computer storage medium | |
CN112308797A (en) | Corner detection method and device, electronic equipment and readable storage medium | |
CN113391779A (en) | Parameter adjusting method, device and equipment for paper-like screen | |
CN111199169A (en) | Image processing method and device | |
CN112102207A (en) | Method and device for determining temperature, electronic equipment and readable storage medium | |
CN110222576B (en) | Boxing action recognition method and device and electronic equipment | |
CN110197459B (en) | Image stylization generation method and device and electronic equipment | |
CN109658360B (en) | Image processing method and device, electronic equipment and computer storage medium | |
CN110209861A (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
CN111292247A (en) | Image processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |