CN111199169A - Image processing method and device - Google Patents


Info

Publication number
CN111199169A
CN111199169A (application CN201811368722.7A)
Authority
CN
China
Prior art keywords
image
human hand
hand
processing
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811368722.7A
Other languages
Chinese (zh)
Inventor
罗国中
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Microlive Vision Technology Co Ltd
Original Assignee
Beijing Microlive Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Microlive Vision Technology Co Ltd filed Critical Beijing Microlive Vision Technology Co Ltd
Priority to CN201811368722.7A priority Critical patent/CN111199169A/en
Publication of CN111199169A publication Critical patent/CN111199169A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/194 - Segmentation; Edge detection involving foreground-background segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/277 - Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/70 - Determining position or orientation of objects or cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/28 - Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 - Camera processing pipelines; Components thereof
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621 - Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30241 - Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides an image processing method and apparatus, an electronic device, and a computer-readable storage medium. The image processing method includes: acquiring a video image; recognizing a human hand in the video image to obtain hand information; tracking the movement of the hand according to the hand information and determining the region through which the hand passes; and processing the image within that region. By tracking hand movement and processing the image region the hand passes through, embodiments of the disclosure solve the prior-art technical problem that the image processing region cannot be set flexibly.

Description

Image processing method and device
Technical Field
The present disclosure relates to the field of images, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of computer technology, intelligent terminals have found an ever wider range of applications, such as listening to music, playing games, chatting online, and taking photographs. As for photographing, the cameras of intelligent terminals now exceed ten megapixels, offering high definition and a photographic effect comparable to that of a professional camera.
At present, when an intelligent terminal is used for photographing, not only can the traditional photographing effects be achieved with the camera software built in at the factory, but effects with additional functions can also be obtained by downloading an application (APP for short) from the network, for example APPs providing dark-light detection, a beauty camera, super pixels, and the like. The beautification functions of an intelligent terminal usually include effects such as skin-tone adjustment, skin smoothing, eye enlargement, and face slimming, and can apply the same degree of beautification to every face recognized in the image. An APP can also apply other processing to the image captured by the intelligent terminal, such as adding certain features.
However, the image processing described above can only be applied to a fixed area, such as the full image or a pre-designated region, for example a region of predetermined size at the center of the screen. If the processing area needs to change, it must be reset, which is inflexible and tedious to operate.
Disclosure of Invention
In a first aspect, an embodiment of the present disclosure provides an image processing method, including: acquiring a video image; recognizing a human hand in the video image to obtain hand information; tracking the movement of the hand according to the hand information and determining the region through which the hand passes; and processing the image within the region.
Further, recognizing the human hand in the video image to obtain hand information includes: recognizing the hand in the video and acquiring the position of the hand and the hand keypoints.
Further, recognizing the human hand in the video image to obtain hand information further includes: obtaining the contour region of the hand from the hand keypoints.
Further, tracking the movement of the hand according to the hand information and determining the region through which the hand passes includes: determining the movement trajectory of the hand from the change trajectory of the same hand information; and determining the region the hand passes through from the movement trajectory.
Further, the video image includes a foreground image and a background image, and processing the image within the region includes: applying a first processing to the foreground image within the region and blending the processed foreground image with the background image.
Further, the video image includes a foreground image and a background image, and processing the image within the region includes: applying a first processing to the foreground image within the region, applying a second processing to the background image, and blending the two processed images.
Further, tracking the movement of the hand according to the hand information and determining the region through which the hand passes includes: recognizing the hand gesture from the hand information; and, when the gesture is a first gesture, tracking the movement of the hand and determining the region the hand passes through from the hand information of the first gesture.
Further, before tracking the movement of the hand and determining the region through which it passes, the method further includes: acquiring a template image used to record the historical region through which the hand has passed.
Further, in the template image, pixels in the region the hand has passed through have a first value, and pixels in the region the hand has not passed through have a second value.
Further, before acquiring the video image, the method further includes: setting processing parameters that determine how the image is processed.
In a second aspect, an embodiment of the present disclosure provides an image processing apparatus, including:
an image acquisition module for acquiring a video image;
a hand information acquisition module for recognizing a human hand in the video image to obtain hand information;
a region determination module for tracking the movement of the hand according to the hand information and determining the region through which the hand passes;
and an image processing module for processing the image within the region.
Further, the hand information acquisition module is further configured to:
recognize the hand in the video and acquire the position of the hand and the hand keypoints.
Further, the hand information acquisition module is further configured to:
obtain the contour region of the hand from the hand keypoints.
Further, the region determination module is further configured to:
determine the movement trajectory of the hand from the change trajectory of the same hand information;
and determine the region the hand passes through from the movement trajectory.
Further, the video image includes a foreground image and a background image, and the image processing module 404 is further configured to:
apply a first processing to the foreground image within the region and blend the processed foreground image with the background image.
Further, the video image includes a foreground image and a background image, and the image processing module 404 is further configured to:
apply a first processing to the foreground image within the region, apply a second processing to the background image, and blend the two processed images.
Further, the region determination module is further configured to:
recognize the hand gesture from the hand information;
and, when the gesture is a first gesture, track the movement of the hand and determine the region the hand passes through from the hand information of the first gesture.
Further, the image processing apparatus further includes:
a template image acquisition module for acquiring a template image used to record the historical region through which the hand has passed.
Further, in the template image, pixels in the region the hand has passed through have a first value, and pixels in the region the hand has not passed through have a second value.
Further, the image processing apparatus further includes:
a parameter setting module for setting processing parameters that determine how the image is processed.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image processing method of any of the preceding first aspects.
In a fourth aspect, the present disclosure provides a non-transitory computer-readable storage medium, which stores computer instructions for causing a computer to execute the image processing method according to any one of the foregoing first aspects.
The present disclosure provides an image processing method and apparatus, an electronic device, and a computer-readable storage medium. The image processing method includes: acquiring a video image; recognizing a human hand in the video image to obtain hand information; tracking the movement of the hand according to the hand information and determining the region through which the hand passes; and processing the image within that region. By tracking hand movement and processing the image region the hand passes through, embodiments of the disclosure solve the prior-art technical problem that the image processing region cannot be set flexibly.
The foregoing is a summary of the present disclosure. To make its technical means clearer, embodiments are described in detail below. The disclosure may also be embodied in other specific forms without departing from its spirit or essential attributes.
Drawings
To illustrate the embodiments of the present disclosure or the technical solutions in the prior art more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present disclosure; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a first embodiment of an image processing method according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a second embodiment of an image processing method according to the present disclosure;
Figs. 3a-3f are schematic diagrams of a specific example of an image processing method provided by an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an embodiment of an image processing apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device provided according to an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described below through specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from this specification. It should be understood that the described embodiments are merely some, not all, of the embodiments of the disclosure. The disclosure may be implemented in various other specific embodiments, and various modifications and changes in detail may be made without departing from its spirit. It should be noted that, absent conflict, the features of the following embodiments and examples may be combined with one another. All other embodiments obtained by a person skilled in the art from the disclosed embodiments without creative effort fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
Fig. 1 is a flowchart of a first embodiment of the image processing method provided by an embodiment of the present disclosure. The image processing method of this embodiment may be executed by an image processing apparatus, which may be implemented as software or as a combination of software and hardware, and which may be integrated into a device of an image processing system, such as an image processing server or an image processing terminal device. As shown in Fig. 1, the method includes the following steps:
step S101, acquiring a video image;
the acquired video may be acquired by an image sensor, which refers to various devices that can capture images, and typical image sensors are video cameras, still cameras, and the like. In this embodiment, the image sensor may be a camera on a mobile terminal, such as a front-facing or rear-facing camera on a smart phone, and a video image acquired by the camera may be directly displayed on a display screen of the smart phone.
The video can also comprise a human hand, and the human hand can be a human hand collected by the image sensor.
Step S102: identifying hands in the video image to obtain hand information;
in this step, the human hand is recognized, and information of the human hand is acquired. When the human hand is recognized, the position of the human hand can be positioned by using the color features, the human hand is segmented from the background, and feature extraction and recognition are carried out on the found and segmented human hand image. Specifically, color information of an image and position information of the color information are acquired by using an image sensor; comparing the color information with preset hand color information; identifying first color information, wherein the error between the first color information and the preset human hand color information is smaller than a first threshold value; and forming the outline of the human hand by using the position information of the first color information. Preferably, in order to avoid interference of the ambient brightness to the color information, image data of an RGB color space acquired by the image sensor may be mapped to an HSV color space, information in the HSV color space is used as contrast information, and preferably, a hue value in the HSV color space is used as color information, so that the hue information is minimally affected by brightness, and the interference of the brightness can be well filtered. It is understood that other ways of roughly positioning the position of the human hand may be used, which are only examples and do not limit the disclosure, and the other ways of positioning are not described herein. The position of the human hand is roughly determined by using the human hand outline, and then the key point extraction is carried out on the human hand. 
Extracting the keypoints of a human hand in an image amounts to finding the position coordinates of each keypoint of the hand contour in the hand image, i.e., keypoint localization. This process relies on the features characteristic of each keypoint: once image features that clearly identify a keypoint have been obtained, the image is searched and compared against those features to locate the keypoint precisely. Since keypoints occupy only a very small area of the image (usually just a few to a few dozen pixels), the regions occupied by their features are correspondingly limited and local, and two feature extraction approaches are currently in use: (1) extracting one-dimensional range image features perpendicular to the contour; (2) extracting two-dimensional range image features in a square neighborhood of the keypoint. Both approaches have many implementations, such as ASM and AAM methods, statistical energy function methods, regression analysis methods, deep learning methods, classifier methods, batch extraction methods, and so on. These implementations differ in the number of keypoints used, accuracy, and speed, and suit different application scenarios. The same principles can likewise be used to recognize other target objects.
After the hand is recognized, a polygon circumscribing its outer contour is drawn as the hand's external detection box, which stands in for the hand and describes its position. Taking a rectangle as an example, after the hand keypoints are recognized, the width of the widest part of the hand and the length of its longest part can be computed, and the circumscribed detection box obtained from that width and length. One way to compute the longest and widest extent is to extract the boundary keypoints of the hand, take the difference between the X coordinates of the two boundary keypoints farthest apart in X as the width of the rectangle, and the difference between the Y coordinates of the two boundary keypoints farthest apart in Y as its height. If the hand is clenched into a fist, the detection box can instead be set to the smallest circle covering the fist. The center point of the detection box, i.e., the intersection of its diagonals, can be used as the position of the hand; for a fist, the center of the circle can likewise stand in for its position.
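The circumscribed-rectangle computation just described can be sketched as below; the function names are hypothetical.

```python
def bounding_box(keypoints):
    """Axis-aligned circumscribed detection box from hand boundary keypoints.

    keypoints: list of (x, y) tuples. Width is the difference of the two
    farthest X coordinates, height the difference of the two farthest Y
    coordinates, as described above.
    Returns (x_min, y_min, width, height).
    """
    xs = [x for x, _ in keypoints]
    ys = [y for _, y in keypoints]
    return min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)

def box_center(box):
    """Intersection of the rectangle's diagonals; stands in for the hand position."""
    x, y, w, h = box
    return x + w / 2.0, y + h / 2.0
```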
The human hand identification further includes detected human hand key points, the number of the key points may be set, and generally, the key points may include key points of a human hand contour and joint key points, each key point has a fixed number, for example, the key points may be numbered from top to bottom according to the sequence of the contour key point, the thumb joint key point, the index finger joint key point, the middle finger joint key point, the ring finger joint key point, and the little finger joint key point, in a typical application, the number of the key points is 22, and each key point has a fixed number. In one embodiment, the location of the human hand may also be represented using a keypoint of the palm center.
In one embodiment, the region enclosed by connecting the keypoints is used as the hand contour region. The keypoints can be connected according to a preset rule, e.g., connecting the keypoints on each finger in sequence, connecting the contour keypoints in sequence, and so on, with the enclosed region taken as the hand contour region. In this embodiment, to smooth the hand contour region, it may be further processed as follows: form a matrix from the pixels in the hand contour region; process this pixel matrix with a preset processing matrix; and use the contour region formed by the processed pixels as the final hand contour region. For ease of understanding, an example of this processing follows. The pixels in the hand contour region form the matrix:
[Pixel matrix example shown as figure BDA0001869262060000081 in the original; not reproduced here.]
the processing matrix is as follows:
[Processing matrix shown as figure BDA0001869262060000082 in the original; not reproduced here.]
The processing matrix is translated across the pixel matrix of the hand contour region; at each position where it overlaps the pixel matrix, the element values of a new matrix are calculated and the maximum of those values is taken as the new value of the element under the center of the processing matrix, continuing until every element of the pixel matrix has been processed. Specifically, in the example above, taking the first value as an example, the two sub-matrices are added:
[Sub-matrix addition shown as figure BDA0001869262060000091 in the original; not reproduced here.]
the value of the element at the first central point is 3, and the above operations are sequentially performed on each element of the matrix formed by the pixel points in the human hand outline area, so as to obtain a new matrix:
[Resulting matrix shown as figure BDA0001869262060000092 in the original; not reproduced here.]
through the processing, the outline area of the human hand is enlarged, and the edge is smoother.
In one embodiment, recognizing the hand further includes smoothing and coordinate normalization of the recognition data. Specifically, the smoothing may average images over multiple video frames and use the averaged image as the recognition result: for the hand in this disclosure, the hand is recognized in multiple frames, the hand images are weighted-averaged, the averaged hand image is taken as the recognized hand, and the hand information is computed from it. Coordinate normalization unifies the coordinate ranges: the coordinates of the hand image collected by the camera and of the hand image shown on the display screen may differ, so a mapping is needed from the larger coordinate system to the smaller one. The hand information is obtained after smoothing and normalization.
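A minimal sketch of the two operations above for a single keypoint; the function names, the equal default weights, and the linear scaling between coordinate systems are assumptions for illustration.

```python
def smooth_keypoint(frames, weights=None):
    """Weighted average of the same keypoint's (x, y) across several frames."""
    if weights is None:
        weights = [1.0] * len(frames)  # plain average by default (assumed)
    total = sum(weights)
    x = sum(w * p[0] for w, p in zip(weights, frames)) / total
    y = sum(w * p[1] for w, p in zip(weights, frames)) / total
    return x, y

def normalize(point, src_size, dst_size):
    """Map a point from the camera's coordinate system to the screen's."""
    (x, y), (sw, sh), (dw, dh) = point, src_size, dst_size
    return x * dw / sw, y * dh / sh
```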
Step S103: tracking the movement of the human hand according to the human hand information, and determining the area through which the human hand passes;
in this embodiment, the gesture of the human hand and/or the motion trajectory of the human hand may be recognized according to the human hand information;
the gesture recognition can be performed by using the hand image information and putting the hand image information into a deep learning model for recognition, and if the key point information of the hand is input into the deep learning model, the gesture of the hand is recognized, which is not described herein again.
Tracking the motion trajectory of the hand first requires tracking the hand's movement. In a vision-based hand motion recognition system, trajectory tracking means following the position changes of the gesture across a sequence of frames to obtain the hand's position over continuous time, and the quality of trajectory tracking directly affects the quality of hand motion recognition. Commonly used motion tracking methods include the particle filter algorithm, the Mean-shift algorithm, Kalman filtering, and skeletal tracking.
Target tracking based on particle filtering is a random search process that obtains a posterior probability estimate of the target's distribution under a stochastic motion model; it consists mainly of two steps, preliminary sampling and resampling. Preliminary sampling places particles randomly in the image, then computes each particle's similarity to the features of the tracked target and from it the particle's weight. The resampling stage redistributes the particles according to the weights from the preliminary sampling. Preliminary sampling and resampling are repeated until the target is tracked.
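A toy one-dimensional illustration of the sampling/resampling loop above; the similarity function, the Gaussian motion noise, and its scale are assumptions, not part of the disclosure.

```python
import random

def particle_filter_step(particles, measure, sigma=5.0, rng=random):
    """One preliminary-sampling + resampling step of a 1-D particle filter.

    particles: candidate positions. measure(p) returns the particle's
    similarity to the tracked target's features; weights follow from it.
    Resampling redraws particles in proportion to their weights, then adds
    small random motion noise (assumed Gaussian with std `sigma`).
    """
    weights = [measure(p) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    resampled = rng.choices(particles, weights=weights, k=len(particles))
    return [p + rng.gauss(0.0, sigma) for p in resampled]
```

Iterating this step concentrates the particle cloud around the position where the similarity peaks, i.e., around the tracked target.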
The Mean-shift method is a non-parametric probability-density gradient estimation algorithm. First a hand model is built by computing, in feature space, the probabilities of the feature values of the pixels belonging to the hand in the initial image frame. Then a model of the current frame is built by computing the feature-value probabilities of all pixels in the region where the hand may be. Finally, the mean-shift amount of the hand is obtained from the similarity between the initial hand model and the current frame's hand model. By the convergence of the mean-shift algorithm, iterating this mean-shift computation converges to the hand's position in the current image frame.
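A simplified flat-kernel version of the mean-shift iteration above, operating on candidate hand-pixel coordinates rather than a full feature-space model; the window radius, convergence tolerance, and function name are assumptions.

```python
def mean_shift(points, start, radius=10.0, max_iter=100, eps=1e-3):
    """Converge a circular window to the densest nearby cluster of points.

    points: (x, y) samples, e.g. pixels judged likely to belong to the hand.
    Repeatedly moves the window center to the mean of the points inside the
    window; stops when the shift falls below eps (convergence).
    """
    cx, cy = start
    for _ in range(max_iter):
        inside = [(x, y) for x, y in points
                  if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2]
        if not inside:
            break  # window fell on empty space; give up
        nx = sum(x for x, _ in inside) / len(inside)
        ny = sum(y for _, y in inside) / len(inside)
        shift = ((nx - cx) ** 2 + (ny - cy) ** 2) ** 0.5
        cx, cy = nx, ny
        if shift < eps:
            break
    return cx, cy
```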
Kalman filtering uses a series of mathematical equations to predict the present or future state of a linear system. In tracking the hand's motion trajectory, Kalman filtering mainly observes the hand's position in a series of image frames and then predicts the hand's position in the next frame. Because Kalman filtering is built on a posterior probability estimate at each time step, it achieves a good tracking effect in a Gaussian-distributed environment. The method removes noise and still obtains a good hand-tracking result under gesture deformation.
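A minimal constant-velocity sketch of this observe-then-predict-the-next-frame loop, in one dimension; the noise covariances are illustrative values, not taken from the patent.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over 1-D hand positions.

    State is [position, velocity]; each frame we predict, then correct with the
    measured hand position, and finally predict the position in the next frame.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # linear motion model
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    for z in measurements:
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the observed hand position.
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    # Predicted hand position in the next frame.
    return float((F @ x)[0, 0])

# Hand moving right roughly 2 px per frame; predict frame 6.
pred = kalman_track([10.0, 12.1, 13.9, 16.0, 18.1])
```

The filter learns the ~2 px/frame velocity from the noisy observations and extrapolates it one frame ahead.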
With the widespread adoption of the Microsoft Kinect, many researchers use the skeletal point tracking specific to Kinect sensors for human hand tracking research. Within the sensor's field of view, the Kinect can provide complete skeletal tracking for one or two users, i.e., 20 joint points over the whole body. Skeletal point tracking has an active mode and a passive mode: in active mode, up to two users in the field of view are selected for full tracking; in passive mode, up to six users can be tracked, but the four beyond the two fully tracked users are tracked by position only. The principle of the Kinect's skeletal tracking is, on the basis of the obtained depth image, to classify 32 parts of the human body using machine-learning methods and to locate the skeletal joint point of each part.
In one embodiment, the hand's movement trajectory can be determined from the change trajectory of the same hand information. Since key points of the hand skeleton can be collected in this step, a trajectory tracking method based on skeletal tracking is preferred in the present disclosure. The moving distance of a hand key point between two consecutive image frames can be calculated; when that distance is smaller than a preset threshold, the key point's position is considered unchanged, and when the key point remains unchanged for a preset number of consecutive frames, the hand's position is identified as the start point or end point of the motion. Typically, the threshold can be set to 1 cm, and when the key point's position does not change for 6 consecutive frames, the hand's position is taken as the start point or end point. The positions of the key points in the image frames between the start and end points are then calculated, and the trajectory formed by the key points across all these frames is the hand's motion trajectory.
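The start/end-point rule described above (position unchanged for a preset number of consecutive frames) can be sketched as follows; the threshold is expressed in image units rather than centimeters, since the pixel-to-centimeter mapping depends on the camera, and the function name is illustrative.

```python
def find_pause_frames(keypoints, threshold=1.0, hold=6):
    """Return frame indices where a hand key point has stayed (nearly) still.

    keypoints: list of (x, y) positions of one key point, one per frame.
    A frame is a start/end-point candidate when the point has moved less than
    `threshold` for `hold` consecutive frames (the 1 cm / 6-frame rule above,
    with the threshold in image units).
    """
    pauses, still = [], 0
    for i in range(1, len(keypoints)):
        (x0, y0), (x1, y1) = keypoints[i - 1], keypoints[i]
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        still = still + 1 if dist < threshold else 0
        if still == hold:          # held still for the required run of frames
            pauses.append(i)
    return pauses

# 7 still frames, 3 moving frames, then 6 still frames again.
track = [(0, 0)] * 7 + [(5, 0), (10, 0), (15, 0)] + [(15, 0)] * 6
pauses = find_pause_frames(track)
```

The two detected indices bracket the moving segment, so the key-point positions between them form the motion trajectory.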
In one embodiment, the area passed by the human hand is determined from the movement trajectory. From the hand's motion trajectory, the hand contour area and hand position in each image frame can be determined; recording the hand contour area in each frame and superimposing these areas onto the current frame yields the area the hand has passed through. In one embodiment, before tracking the hand's movement according to the hand information and determining the area passed by the hand, a template image can be acquired; the template image records the historical area the hand has passed through. In the template image, the pixel value of the area the hand has passed is a first value, and the pixel value of the area it has not passed is a second value. Specifically, all pixel values of the template image are 1 in the initial state and are set to 0 where the hand passes; superimposing the current image frame on the template image determines the area the hand has passed through in the current frame.
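A minimal sketch of the template-image bookkeeping described above, using NumPy arrays: the template starts as all 1s, pixels the hand covers are set to 0, and compositing with the current frame reveals the wiped region. The hand contours are hard-coded slices for illustration, and the single-channel "mist" and "camera" layers are stand-ins.

```python
import numpy as np

h, w = 4, 6
template = np.ones((h, w), dtype=np.uint8)      # 1 = hand has not passed here yet
frame_hand_regions = [
    (slice(0, 2), slice(0, 2)),                 # hand contour area in frame 1
    (slice(1, 3), slice(1, 4)),                 # hand contour area in frame 2
]
for region in frame_hand_regions:
    template[region] = 0                        # mark pixels the hand covered

foreground = np.full((h, w), 200, dtype=np.uint8)   # e.g. a mist mask layer
background = np.full((h, w), 50, dtype=np.uint8)    # camera frame
# Where the template is 0 the hand has passed: show the background there.
composite = np.where(template == 1, foreground, background)
```

Overlapping contour areas are only counted once because the template pixel is simply set to 0 again, so the template accumulates the union of all areas the hand has visited.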
In one embodiment, the gesture of the human hand is recognized according to the hand information; when the gesture is a first gesture, the movement of the hand is tracked, and the area the hand passes through is determined according to the hand information of the first gesture. Specifically, the hand's gesture is recognized from the hand key points; typical gestures include five fingers spread, index finger extended, a fist, and so on. When the gesture is a preset gesture, the hand's movement is tracked and the area it passes through is determined according to the hand information of that preset gesture. In one example, the gesture is an extended index finger: the hand contour area is determined from the fingertip of the index finger, the hand's movement trajectory is determined from the movement trajectory of the fingertip key points, and the area the fingertip passes through is determined from the hand contour area and the movement trajectory.
Step S104: processing the image in the region;
in this step, image processing is performed on the image within the region determined in step S103 through which the hand has passed. The processing may be any processing, such as deformation, filtering, color-card superimposition, beautification, transparency change, or color change, and may be a single image processing operation or a combination of several; this disclosure does not limit it.
In one embodiment, the video image includes a foreground image and a background image, and processing the image in the region comprises: performing first processing on the foreground image in the region and mixing the processed foreground image with the background image. The foreground image may be a mask image, and the background image may be the actual image collected by the image sensor. In this embodiment, only the foreground image is processed, and the processed foreground image is then mixed with the background image. The mixing may take any form; a typical example is mixing the colors of the foreground and background images 1:1 and using the mixed color as the color of the resulting image. The present disclosure does not limit the form of mixing. The first processing may be any of the image processing operations described above. In one embodiment, the foreground may be a mask image with a water mist effect, and the first processing adjusts the transparency to 100%; the area the hand passes through is then displayed as transparent, and after mixing with the background image, the effect of wiping away the mist is shown.
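The first-processing-then-mix path can be sketched as below for single-channel images: inside the wiped region the foreground's opacity is dropped to 0 (the text's "transparency adjusted to 100%"), and outside it the foreground and background are mixed 1:1. Function and parameter names are illustrative, not from the patent.

```python
import numpy as np

def blend_with_transparency(foreground, background, region_mask, alpha_in_region=0.0):
    """First processing + mixing, as a sketch.

    The foreground (mask image, e.g. a mist layer) has its opacity set to
    `alpha_in_region` where the hand has passed (region_mask == 1); outside
    the region, foreground and background are mixed 1:1.
    """
    fg = foreground.astype(np.float32)
    bg = background.astype(np.float32)
    # Per-pixel foreground opacity: 0.5 (1:1 mix) outside the region,
    # alpha_in_region (fully transparent when 0.0) inside it.
    alpha = np.where(region_mask == 1, alpha_in_region, 0.5)
    out = alpha * fg + (1.0 - alpha) * bg
    return out.astype(np.uint8)

fg = np.full((2, 2), 200, dtype=np.uint8)           # mist mask layer
bg = np.full((2, 2), 100, dtype=np.uint8)           # camera image
mask = np.array([[1, 0], [0, 0]], dtype=np.uint8)   # hand passed the top-left pixel
out = blend_with_transparency(fg, bg, mask)
```

Where the hand has passed, only the background shows (the mist is wiped away); elsewhere the two layers average, keeping the mist visible.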
In one embodiment, the video image comprises a foreground image and a background image, and processing the image in the region comprises: performing first processing on the foreground image in the region, performing second processing on the background image, and mixing the processed foreground image with the processed background image. The foreground image may be a mask image, and the background image may be the actual image collected by the image sensor. In this embodiment, both the foreground image and the background image are processed, and the first and second processing may differ; the processed images are then mixed. The mixing may take any form, for example mixing the colors of the foreground and background images 1:1 and using the mixed color as the color of the resulting image; the present disclosure does not limit the form of mixing. The first and second processing may each be any of the image processing operations described above. In a specific example, the foreground may be a mask image with a water mist effect and the first processing may adjust the transparency to 100%, so that the area the hand passes through is displayed as transparent; the second processing may be a skin-smoothing pass on the human face, so that as the area the hand passes through becomes transparent and exposes the face in the background image, the face appears smoothed once the foreground and background images are mixed. In this embodiment, the second processing may also be applied directly to the entire background image before mixing the processed foreground and background images.
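As a stand-in for the "second processing" applied to the background (e.g. face smoothing), the sketch below uses a naive box blur; a production beautification filter would be far more elaborate, so this only illustrates where such a pass slots into the pipeline before mixing.

```python
import numpy as np

def box_blur(img, k=3):
    """Naive k-by-k box blur, standing in for the 'second processing'
    (e.g. a skin-smoothing pass on the background image)."""
    pad = k // 2
    padded = np.pad(img.astype(np.float32), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float32)
    for dy in range(k):                 # accumulate the k*k shifted copies
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return (out / (k * k)).astype(np.uint8)

background = np.zeros((5, 5), dtype=np.uint8)
background[2, 2] = 90                   # a single bright pixel to smooth
smoothed = box_blur(background)
```

The bright pixel is spread evenly over its 3x3 neighbourhood; the smoothed background would then be blended with the transparency-adjusted foreground exactly as in the previous example.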
As shown in fig. 2, before the acquiring a video image, the image processing method of the present disclosure may further include:
step S201: setting the processing parameters, wherein the parameters determine the mode of processing the image.
In this embodiment, the processing parameters may include the type of processing, the processing coefficient, and the like, which set the type and degree of image processing. In a specific example, the processing parameters may set the type of the first processing to transparency adjustment and the type of the second processing to face smoothing, with the transparency set to 100% and the smoothing strength set to 50%. It should be understood that the above examples are illustrative rather than exhaustive; any parameter used to configure image processing may be used in the present disclosure, which is not specifically limited here.
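One plausible way to hold such settings is a small parameter object; the field names and defaults below are illustrative, chosen to mirror the example in the text (transparency 100%, smoothing 50%), and are not defined by the patent.

```python
from dataclasses import dataclass

@dataclass
class ProcessingParams:
    """Hypothetical parameter object for the 'set processing parameters' step.

    Field names are illustrative: the type of each processing operation and
    its strength/coefficient.
    """
    first_type: str = "transparency"
    first_coeff: float = 1.0       # 100% transparent in the wiped region
    second_type: str = "smooth_face"
    second_coeff: float = 0.5      # 50% smoothing strength

params = ProcessingParams()
```

Such an object would be created in step S201 and consulted in step S104 to choose which operations to apply and how strongly.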
For ease of understanding, fig. 3a-3f show a specific example of the image processing method disclosed in the present disclosure. Referring to fig. 3a, the acquired video image is an image obtained by mixing a video image captured by a mobile phone camera with a water mist image mask. As shown in fig. 3b, the human hand in the video image is recognized to obtain hand information, namely the hand's contour and position; the hand's movement is tracked according to that contour and position, the area the hand passes through is determined, and transparency processing is applied to the mask image in that area to achieve the effect of the palm wiping away the mist. As the hand keeps moving, as shown in fig. 3c-3f, the area it has passed through grows larger, the images in that area are made transparent, and the mist gradually diminishes until it is essentially gone. This example corresponds to the embodiment described above in which the foreground image is mixed with the background after the first processing.
The disclosure discloses an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium. The image processing method comprises: acquiring a video image; identifying a human hand in the video image to obtain human hand information; tracking the movement of the human hand according to the human hand information and determining the area through which the human hand passes; and processing the image in the region. By tracking the hand's movement and processing the image area the hand passes through, the embodiments of the disclosure solve the technical problem in the prior art that the image processing area cannot be set flexibly.
Fig. 4 is a schematic structural diagram of an embodiment of an image processing apparatus according to an embodiment of the present disclosure. As shown in fig. 4, the apparatus 400 includes: an image acquisition module 401, a human hand information acquisition module 402, an area determination module 403, and an image processing module 404, wherein:
the image acquisition module 401 is configured to acquire a video image;
the human hand information acquisition module 402 is configured to identify the human hand in the video image and obtain human hand information;
the area determination module 403 is configured to track the movement of the human hand according to the human hand information and determine the area through which the human hand passes;
the image processing module 404 is configured to process the image in the area.
Further, the human hand information acquisition module 402 is further configured to:
identify the human hand in the video image and acquire the hand position and hand key points.
Further, the human hand information acquisition module 402 is further configured to:
obtain the hand contour area according to the hand key points.
Further, the area determination module 403 is further configured to:
determine the movement trajectory of the human hand according to the change trajectory of the same human hand information; and
determine the area passed by the human hand according to the movement trajectory.
Further, the video image includes a foreground image and a background image, and the image processing module 404 is further configured to:
perform first processing on the foreground image in the area, and mix the processed foreground image with the background image.
Further, the video image includes a foreground image and a background image, and the image processing module 404 is further configured to:
perform first processing on the foreground image in the area, perform second processing on the background image, and mix the processed foreground image with the processed background image.
Further, the area determination module 403 is further configured to:
recognize the gesture of the human hand according to the human hand information; and
when the gesture is a first gesture, track the movement of the human hand and determine the area the hand passes through according to the human hand information of the first gesture.
Further, the image processing apparatus 400 further includes:
a template image acquisition module, configured to acquire a template image, wherein the template image is used for recording the historical area passed by the human hand.
Further, in the template image, the pixel value of the area passed by the human hand is a first value, and the pixel value of the area not passed by the human hand is a second value.
Further, the image processing apparatus 400 further includes:
a parameter setting module, configured to set the processing parameters, wherein the parameters determine the manner of processing the image.
The apparatus shown in fig. 4 can perform the methods of the embodiments shown in fig. 1 and fig. 2; for details not described in this embodiment, as well as the implementation process and technical effects of the technical solution, refer to the description of the embodiments shown in fig. 1 and fig. 2, which is not repeated here.
Referring now to FIG. 5, a block diagram of an electronic device 500 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 501.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring at least two internet protocol addresses; sending a node evaluation request comprising the at least two internet protocol addresses to node evaluation equipment, wherein the node evaluation equipment selects the internet protocol addresses from the at least two internet protocol addresses and returns the internet protocol addresses; receiving an internet protocol address returned by the node evaluation equipment; wherein the obtained internet protocol address indicates an edge node in the content distribution network.
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving a node evaluation request comprising at least two internet protocol addresses; selecting an internet protocol address from the at least two internet protocol addresses; returning the selected internet protocol address; wherein the received internet protocol address indicates an edge node in the content distribution network.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above, or their equivalents, does not depart from the spirit of the disclosure. For example, technical solutions formed by substituting the features described above with (but not limited to) features with similar functions disclosed in this disclosure also fall within this scope.

Claims (13)

1. An image processing method, comprising:
acquiring a video image;
identifying hands in the video image to obtain hand information;
tracking the movement of the human hand according to the human hand information, and determining the area through which the human hand passes;
processing the image in the region.
2. The image processing method of claim 1, wherein the recognizing the human hand in the video image to obtain human hand information comprises:
identifying the human hand in the video image, and acquiring a hand position and hand key points.
3. The image processing method of claim 2, wherein the recognizing the human hand in the video image to obtain human hand information further comprises:
obtaining a hand contour area according to the hand key points.
4. The image processing method according to claim 1, wherein said determining the region that the human hand passes by tracking the movement of the human hand based on the human hand information comprises:
determining the moving track of the human hand according to the changing track of the same human hand information;
and determining the area passed by the human hand according to the moving track.
5. The image processing method of claim 1, wherein the video image comprises a foreground image and a background image, and wherein the processing the image in the region comprises:
performing first processing on the foreground image in the region, and mixing the processed foreground image with the background image.
6. The image processing method of claim 1, wherein the video image comprises a foreground image and a background image, and wherein the processing the image in the region comprises:
performing first processing on the foreground image in the region, performing second processing on the background image, and mixing the processed foreground image with the processed background image.
7. The image processing method according to claim 1, wherein said determining the region that the human hand passes by tracking the movement of the human hand based on the human hand information comprises:
recognizing a gesture of the human hand according to the human hand information;
when the gesture is a first gesture, the movement of the hand is tracked, and the area where the hand passes is determined according to the hand information of the first gesture.
8. The image processing method according to claim 1, before tracking the movement of the human hand based on the human hand information to determine the region through which the human hand passes, further comprising:
acquiring a template image, wherein the template image is used for recording a historical area passed by the human hand.
9. The image processing method according to claim 8, characterized in that: in the template image, the pixel value of the area passed by the human hand is a first value, and the pixel value of the area not passed by the human hand is a second value.
10. The image processing method of claim 1, prior to said acquiring a video image, further comprising:
setting the processing parameters, wherein the parameters determine the mode of processing the image.
11. An image processing apparatus characterized by comprising:
the image acquisition module is used for acquiring a video image;
the human hand information acquisition module is used for identifying human hands in the video image to obtain human hand information;
the region determining module is used for tracking the movement of the human hand according to the human hand information and determining a region through which the human hand passes;
and the image processing module is used for processing the image in the area.
12. An electronic device, comprising:
a memory for storing non-transitory computer readable instructions; and
a processor for executing the computer readable instructions such that the processor when executing implements the image processing method according to any of claims 1-10.
13. A computer-readable storage medium storing non-transitory computer-readable instructions which, when executed by a computer, cause the computer to perform the image processing method of any one of claims 1-10.
CN201811368722.7A 2018-11-16 2018-11-16 Image processing method and device Pending CN111199169A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811368722.7A CN111199169A (en) 2018-11-16 2018-11-16 Image processing method and device


Publications (1)

Publication Number Publication Date
CN111199169A true CN111199169A (en) 2020-05-26

Family

ID=70745811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811368722.7A Pending CN111199169A (en) 2018-11-16 2018-11-16 Image processing method and device

Country Status (1)

Country Link
CN (1) CN111199169A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113222582A (en) * 2021-05-10 2021-08-06 广东便捷神科技股份有限公司 Face payment retail terminal
CN113744414A (en) * 2021-09-06 2021-12-03 北京百度网讯科技有限公司 Image processing method, device, equipment and storage medium
CN114185429A (en) * 2021-11-11 2022-03-15 杭州易现先进科技有限公司 Method for positioning gesture key points or estimating gesture, electronic device and storage medium
CN114598823A (en) * 2022-03-11 2022-06-07 北京字跳网络技术有限公司 Special effect video generation method and device, electronic equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040017473A1 (en) * 2002-07-27 2004-01-29 Sony Computer Entertainment Inc. Man-machine interface using a deformable device
CN102426480A (en) * 2011-11-03 2012-04-25 康佳集团股份有限公司 Man-machine interactive system and real-time gesture tracking processing method for same
CN104463782A (en) * 2013-09-16 2015-03-25 联想(北京)有限公司 Image processing method, device and electronic apparatus
TW201514830A (en) * 2013-10-08 2015-04-16 Univ Nat Taiwan Science Tech Interactive operation method of electronic apparatus
CN104866805A (en) * 2014-02-20 2015-08-26 腾讯科技(深圳)有限公司 Real-time face tracking method and device
CN105554364A (en) * 2015-07-30 2016-05-04 宇龙计算机通信科技(深圳)有限公司 Image processing method and terminal
CN105857180A (en) * 2016-05-09 2016-08-17 广西大学 Hazy weather vehicle driving auxiliary system and method
CN106971165A (en) * 2017-03-29 2017-07-21 武汉斗鱼网络科技有限公司 The implementation method and device of a kind of filter
CN108830892A (en) * 2018-06-13 2018-11-16 北京微播视界科技有限公司 Face image processing process, device, electronic equipment and computer readable storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHIGENG PAN et al.: "A real-time multi-cue hand tracking algorithm based on computer vision", pages 219 - 222 *
SHEN XIANG: "Research on optical-based multi-touch interactive system technology", no. 5, pages 82 - 85 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113222582A (en) * 2021-05-10 2021-08-06 广东便捷神科技股份有限公司 Face payment retail terminal
CN113744414A (en) * 2021-09-06 2021-12-03 北京百度网讯科技有限公司 Image processing method, device, equipment and storage medium
CN114185429A (en) * 2021-11-11 2022-03-15 杭州易现先进科技有限公司 Method for positioning gesture key points or estimating gesture, electronic device and storage medium
CN114185429B (en) * 2021-11-11 2024-03-26 杭州易现先进科技有限公司 Gesture key point positioning or gesture estimating method, electronic device and storage medium
CN114598823A (en) * 2022-03-11 2022-06-07 北京字跳网络技术有限公司 Special effect video generation method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110210571B (en) Image recognition method and device, computer equipment and computer readable storage medium
CN110517319B (en) Method for determining camera attitude information and related device
CN108229277B (en) Gesture recognition method, gesture control method, multilayer neural network training method, device and electronic equipment
US10043308B2 (en) Image processing method and apparatus for three-dimensional reconstruction
CN110287891B (en) Gesture control method and device based on human body key points and electronic equipment
CN111199169A (en) Image processing method and device
CN110570460B (en) Target tracking method, device, computer equipment and computer readable storage medium
CN108830186B (en) Text image content extraction method, device, equipment and storage medium
CN110070551B (en) Video image rendering method and device and electronic equipment
CN112749613B (en) Video data processing method, device, computer equipment and storage medium
CN110084154B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN110069125B (en) Virtual object control method and device
CN111062981A (en) Image processing method, device and storage medium
CN111950570B (en) Target image extraction method, neural network training method and device
CN112232311B (en) Face tracking method and device and electronic equipment
CN110069126B (en) Virtual object control method and device
CN111353325A (en) Key point detection model training method and device
CN110858409A (en) Animation generation method and device
CN109981989B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN110222576B (en) Boxing action recognition method and device and electronic equipment
CN110047126B (en) Method, apparatus, electronic device, and computer-readable storage medium for rendering image
CN110197459B (en) Image stylization generation method and device and electronic equipment
CN111258413A (en) Control method and device of virtual object
CN110941327A (en) Virtual object display method and device
CN110232417B (en) Image recognition method and device, computer equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: 100080 408, 4th floor, 51 Zhichun Road, Haidian District, Beijing

Applicant after: Tiktok Technology Co., Ltd.

Address before: 100080 408, 4th floor, 51 Zhichun Road, Haidian District, Beijing

Applicant before: BEIJING MICROLIVE VISION TECHNOLOGY Co.,Ltd.

Country or region before: China
