WO2020077912A1 - Image processing method, device, and hardware device - Google Patents

Image processing method, device, and hardware device

Info

Publication number
WO2020077912A1
Authority
WO
WIPO (PCT)
Prior art keywords
contour feature
feature point
inner contour
target object
point
Prior art date
Application number
PCT/CN2019/073082
Other languages
French (fr)
Chinese (zh)
Inventor
范旭
李琰
杨辉
沈言浩
Original Assignee
北京微播视界科技有限公司
Priority date
Filing date
Publication date
Application filed by 北京微播视界科技有限公司
Publication of WO2020077912A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Definitions

  • the present disclosure relates to the field of image processing, and in particular, to an image processing method, device, and hardware device.
  • At present, when taking pictures with a smart terminal, users can not only use the camera software built in at the factory, but also download applications (APPs) from the network to obtain additional functions, for example APPs that provide dark-light detection, beauty camera, and super-pixel functions.
  • The beautification function of a smart terminal usually includes effects such as skin tone adjustment, skin smoothing, eye enlargement, and face slimming, and can apply the same degree of beautification to all faces recognized in an image. There are also APPs that can produce simple special effects.
  • However, current special-effect functions can only apply a preconfigured effect and composite it into the video or image; if the effect needs to be modified, it must be recreated and composited into the video or image again, which makes special effects very inflexible.
  • An image processing method includes: segmenting a target image to obtain a contour of a target object; generating inner contour feature points of the target object according to the contour of the target object; generating outer contour feature points according to the inner contour feature points; and filling a preset material into the area between the inner contour feature points and the outer contour feature points.
  • the generating the inner contour feature point of the target object according to the contour of the target object includes generating the inner contour feature point along the contour line of the target object.
  • the generating of outer contour feature points according to the inner contour feature points includes generating, for each inner contour feature point, an outer contour feature point in the direction away from the target object.
  • the generating of the outer contour feature point in the direction away from the target object includes: drawing, through a first inner contour feature point, a perpendicular to the line segment connecting the first inner contour feature point and a second inner contour feature point, the first inner contour feature point and the second inner contour feature point being adjacent inner contour feature points; taking a first point on the perpendicular in the direction away from the target object such that the length of the line segment between the first point and the first inner contour feature point is a predetermined length; and using the first point as the outer contour feature point corresponding to the first inner contour feature point.
  • the generating of the outer contour feature point in the direction away from the target object further includes: generating, from a third inner contour feature point, two different outer contour feature points, namely a first auxiliary outer contour feature point and a second auxiliary outer contour feature point, where the third inner contour feature point is the inner contour feature point at an inflection point of the contour; calculating the intersection of the straight line through the first auxiliary outer contour feature point and a second outer contour feature point with the straight line through the second auxiliary outer contour feature point and a fourth outer contour feature point, where the second outer contour feature point is the outer contour feature point corresponding to the second inner contour feature point, the fourth outer contour feature point is the outer contour feature point corresponding to a fourth inner contour feature point, and the second inner contour feature point and the fourth inner contour feature point are the two inner contour feature points adjacent to the third inner contour feature point; and using the intersection as the third outer contour feature point corresponding to the third inner contour feature point.
  • the distance between the inner contour feature point and the outer contour feature point corresponding to the inner contour feature point is the width of the material.
  • the filling of the preset material into the area between the inner contour feature points and the outer contour feature points includes: filling the material into the area formed by two adjacent inner contour feature points and the two adjacent outer contour feature points corresponding to them; and repeating this filling operation until all areas between the inner contour feature points and the outer contour feature points are filled with the material.
  • the segmentation of the target image to obtain the contour of the target object includes: acquiring a video; segmenting the video frame images in the video; and separating the target object in the video frame images from other objects to obtain the contour of the target object.
  • the method further includes: setting a correspondence between the preset material and the target image.
  • An image processing device including:
  • the contour acquisition module is used to segment the target image to obtain the contour of the target object
  • An inner contour feature point generating module configured to generate an inner contour feature point of the target object according to the contour of the target object
  • An outer contour feature point generating module configured to generate an outer contour feature point according to the inner contour feature point
  • the filling module is used to fill the preset material into the area between the inner contour feature point and the outer contour feature point.
  • the inner contour feature point generating module is configured to generate the inner contour feature point along the contour line of the target object.
  • the outer contour feature point generating module includes an outer contour feature point generating sub-module for generating an outer contour feature point in a direction of the inner contour feature point away from the target object according to the inner contour feature point.
  • the outer contour feature point generating sub-module is used to draw, through a first inner contour feature point, a perpendicular to the line segment connecting the first inner contour feature point and a second inner contour feature point, the first inner contour feature point and the second inner contour feature point being adjacent inner contour feature points; take a first point on the perpendicular in the direction away from the target object such that the length of the line segment between the first point and the first inner contour feature point is a predetermined length; and use the first point as the outer contour feature point corresponding to the first inner contour feature point.
  • the outer contour feature point generating sub-module is also used to generate, from a third inner contour feature point, two different outer contour feature points, namely a first auxiliary outer contour feature point and a second auxiliary outer contour feature point, where the third inner contour feature point is the inner contour feature point at an inflection point of the contour; calculate the intersection of the straight line through the first auxiliary outer contour feature point and a second outer contour feature point with the straight line through the second auxiliary outer contour feature point and a fourth outer contour feature point, where the second outer contour feature point is the outer contour feature point corresponding to the second inner contour feature point, the fourth outer contour feature point is the outer contour feature point corresponding to a fourth inner contour feature point, and the second inner contour feature point and the fourth inner contour feature point are the two inner contour feature points adjacent to the third inner contour feature point; and use the intersection as the third outer contour feature point corresponding to the third inner contour feature point.
  • the distance between the inner contour feature point and the outer contour feature point corresponding to the inner contour feature point is the width of the material.
  • the filling module is configured to fill the material into the area formed by two adjacent inner contour feature points and the two adjacent outer contour feature points corresponding to them, and to repeat this filling operation until all areas between the inner contour feature points and the outer contour feature points are filled with the material.
  • the contour acquisition module is used to acquire video; segment the video frame image in the video; separate the target object in the video frame image from other objects to obtain the contour of the target object.
  • the image processing device further includes a correspondence relationship setting module, configured to set a correspondence relationship between the preset material and the target image.
  • An electronic device includes: a memory for storing non-transitory computer-readable instructions; and a processor for running the computer-readable instructions such that, when they are executed, the processor implements the steps of any of the above image processing methods.
  • a computer-readable storage medium is used to store non-transitory computer-readable instructions.
  • when the non-transitory computer-readable instructions are executed by a computer, the computer is caused to perform the steps described in any of the above methods.
  • the present disclosure discloses an image processing method, device, and hardware device.
  • the image processing method includes: segmenting the target image to obtain the contour of the target object; generating inner contour feature points of the target object according to the contour of the target object; generating outer contour feature points according to the inner contour feature points; and filling a preset material into the area between the inner contour feature points and the outer contour feature points.
  • with this method, the target object to be processed can be segmented from the image, and material can be added to the relevant area of the target object to form a special effect; when modifying the special effect, only the material needs to be modified and the image does not need to be re-edited, which improves the efficiency and flexibility of special-effect production.
  • FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a method for generating outer contour feature points according to the present disclosure
  • FIG. 3 is a schematic diagram of a material filling method according to the present disclosure.
  • FIG. 4 is a schematic diagram of the effect after an image is processed by an image processing method according to an embodiment of the present disclosure
  • FIG. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • An embodiment of the present disclosure provides an image processing method.
  • the image processing method provided in this embodiment may be executed by a computing device, which may be implemented as software or a combination of software and hardware.
  • the computing device may be integrated in a server, a terminal device, or the like.
  • the image processing method mainly includes the following steps S101 to S104:
  • Step S101 Segment the target image to obtain the outline of the target object
  • the target image may be any image.
  • in one embodiment, the target image is a picture that contains a target object; the target object may be any object, and in one embodiment the target object is a human body.
  • the target image is segmented so that the target object is separated from other objects, yielding the contour of the target object.
  • in one embodiment, when the target image is a video, the video is acquired first; the video frame images in the video are segmented; and the target object in each video frame image is separated from other objects to obtain the contour of the target object.
  • Image segmentation is generally divided into interactive image segmentation and automatic image segmentation.
  • Traditional image processing generally uses interactive image segmentation, which requires human participation in image segmentation.
  • automatic image segmentation is used, and the following uses human body image segmentation as an example to describe automatic image segmentation.
  • automatic human body image segmentation methods can be divided into the following types: (1) model-based human body image segmentation methods, in which the human face is first detected based on prior knowledge of the face, a torso model is then used to find the torso below the face, the position of the lower body is estimated from the segmented torso, and finally the estimated torso and upper-leg regions provide seed points for image segmentation to complete the segmentation of the human body image; (2) hierarchical-tree-based human body image segmentation methods.
  • in the hierarchical-tree-based methods, adjacent body parts are first modeled and then the entire human body pose is modeled; different body poses are modeled as the sum of nodes on different paths in a hierarchical detection tree, different layers of the tree correspond to different models of adjacent body parts, and different paths along the tree correspond to different human postures, so that during detection the tree is traversed down from the root node and different postures of the human body are segmented along different paths; (3) human body image segmentation methods based on independent component analysis with a reference signal, in which the face is first detected according to prior knowledge of the face, a torso model is then used to find the torso below the face, a reference signal is obtained from the detected torso, the torso is then highlighted in the image using reference-signal independent component analysis to complete the segmentation of the torso, other body parts are segmented in a similar way, and finally the segmentation of the entire human body image is completed; (4) human body image segmentation methods based on
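As a rough illustration of step S101, the sketch below assumes that some person-segmentation method (any of the approaches listed above, or a neural network) has already produced a binary mask, and extracts the target contour from it with OpenCV. The function name and the 0/255 mask format are assumptions of this sketch, not part of the disclosure.

```python
import cv2
import numpy as np

def extract_target_contour(mask: np.ndarray) -> np.ndarray:
    """Return the contour of the largest foreground region in a binary mask.

    `mask` is assumed to come from any person-segmentation method
    (0 for background, 255 for the target object).
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        raise ValueError("no foreground object found in the mask")
    # Keep the largest connected region as the target object.
    largest = max(contours, key=cv2.contourArea)
    return largest.reshape(-1, 2).astype(np.float32)  # (N, 2) array of (x, y) points
```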
  • Step S102 Generate an inner contour feature point of the target object according to the contour of the target object
  • inner contour feature points of the target object are generated according to the contour obtained in step S101. The inner contour feature points may lie directly on the contour line of the target object or may be kept at a predetermined distance from the contour line; for example, an inner contour feature point may be kept 0.1 cm away from the contour line. In one embodiment, the distance between adjacent inner contour feature points is the same, that is, the inner contour feature points are evenly distributed with respect to the contour of the target object.
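A minimal sketch of step S102, under the assumption (stated later in the description) that the sampling distance between inner contour feature points equals the material length L. The function name and the arc-length resampling scheme are illustrative; they are only one possible way to distribute the points evenly.

```python
def sample_inner_points(contour: np.ndarray, spacing: float) -> np.ndarray:
    """Resample a closed contour at approximately equal arc-length intervals.

    `contour` is an (N, 2) array of points along the object outline;
    `spacing` would typically be the length L of the fill material.
    """
    closed = np.vstack([contour, contour[:1]])          # close the loop
    seg_len = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg_len)])   # cumulative arc length
    total = arc[-1]
    n_points = max(int(total // spacing), 3)
    targets = np.linspace(0.0, total, n_points, endpoint=False)
    # Interpolate x and y as functions of arc length.
    xs = np.interp(targets, arc, closed[:, 0])
    ys = np.interp(targets, arc, closed[:, 1])
    return np.stack([xs, ys], axis=1)
```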
  • Step S103 generating an outer contour feature point according to the inner contour feature point
  • an outer contour feature point is generated according to the inner contour feature point generated in step S102.
  • the outer contour feature points are generated in the direction away from the target object according to the inner contour feature points.
  • the generating process may be an interpolation process. Taking the target object as a human body for example, the inner contour feature points are located on the contour line of the human body, and for each inner contour feature point, an outer contour feature point corresponding to it is generated outside the human body.
  • specifically, a perpendicular is drawn through the first inner contour feature point to the line segment connecting the first inner contour feature point and the second inner contour feature point, the first inner contour feature point and the second inner contour feature point being adjacent inner contour feature points; a first point is taken on the perpendicular in the direction away from the target object such that the length of the line segment between the first point and the first inner contour feature point is a predetermined length, and the first point is used as the outer contour feature point corresponding to the first inner contour feature point. These two steps may then be repeated until every inner contour feature point has a corresponding outer contour feature point.
  • for an inner contour feature point at a contour inflection point, two different outer contour feature points, namely a first auxiliary outer contour feature point and a second auxiliary outer contour feature point, are generated from the third inner contour feature point, where the third inner contour feature point is the inner contour feature point at the contour inflection point; the intersection of the line through the first auxiliary outer contour feature point and the second outer contour feature point with the line through the second auxiliary outer contour feature point and the fourth outer contour feature point is calculated, where the second outer contour feature point is the outer contour feature point corresponding to the second inner contour feature point, the fourth outer contour feature point is the outer contour feature point corresponding to the fourth inner contour feature point, and the second inner contour feature point and the fourth inner contour feature point are the two inner contour feature points adjacent to the third inner contour feature point; the intersection is used as the third outer contour feature point corresponding to the third inner contour feature point.
  • as shown in FIG. 2, the inner contour feature points include points 1, 2, 3, 4, 5, and 6; among them, point 4 is the contour inflection point of the target object and corresponds to the third inner contour feature point in the above embodiment, while points 1, 2, 3, 5, and 6 are not inflection points.
  • taking point 2 as an example, a perpendicular to line segment 12 is drawn through point 2, and a point b is taken on the perpendicular in the direction away from the target object such that the length of segment 2b is H, where H is the preset length. Since point 2 is adjacent not only to point 1 but also to point 3, a perpendicular to line segment 23 is also drawn through point 2; in this embodiment, points 1, 2, and 3 lie on the same straight line, so the perpendicular to segment 12 through point 2 coincides with the perpendicular to segment 23.
  • the two candidate points b therefore coincide, and point b can be determined to be the outer contour feature point corresponding to point 2. Repeating this operation for each point yields the outer contour feature points corresponding to inner contour feature points 1, 2, 3, 5, and 6.
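The construction above can be sketched as follows. For each inner point, a candidate outer point is produced from each of its two adjacent segments at distance `width` (the material width H); when the two candidates coincide the point lies on a straight section, and when they differ (an inflection point such as point 4 in FIG. 2) the two offset lines are intersected, mirroring the auxiliary-point construction. The centroid test used to choose the outward side is an assumption of this sketch, not something stated in the disclosure.

```python
def _outward_normal(p: np.ndarray, q: np.ndarray, centroid: np.ndarray) -> np.ndarray:
    """Unit vector perpendicular to segment p->q, pointing away from the centroid."""
    d = q - p
    n = np.array([-d[1], d[0]]) / (np.linalg.norm(d) + 1e-9)
    return -n if np.dot(n, p - centroid) < 0 else n

def generate_outer_points(inner: np.ndarray, width: float) -> np.ndarray:
    """For each inner contour point, produce an outer point at distance `width`."""
    centroid = inner.mean(axis=0)
    n = len(inner)
    outer = np.zeros_like(inner)
    for i in range(n):
        prev_pt, cur, next_pt = inner[i - 1], inner[i], inner[(i + 1) % n]
        d1, d2 = cur - prev_pt, next_pt - cur
        # Auxiliary outer points obtained from the two adjacent segments.
        a = cur + width * _outward_normal(prev_pt, cur, centroid)
        b = cur + width * _outward_normal(cur, next_pt, centroid)
        if np.linalg.norm(a - b) < 1e-6:
            outer[i] = a                      # straight section: candidates coincide
            continue
        # Inflection point: intersect the line through `a` parallel to d1
        # with the line through `b` parallel to d2.
        cross = d1[0] * d2[1] - d1[1] * d2[0]
        if abs(cross) < 1e-9:
            outer[i] = 0.5 * (a + b)          # degenerate case: fall back to the midpoint
        else:
            s = ((b - a)[0] * d2[1] - (b - a)[1] * d2[0]) / cross
            outer[i] = a + s * d1
    return outer
```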
  • Step S104 Fill the preset material into the area between the inner contour feature point and the outer contour feature point.
  • the preset material may be a color card with a fixed size; in this step, the material is filled into the area between the inner contour feature points and the outer contour feature points to form a stroke on the target object.
  • the filling process is as follows: the material is filled into the area formed by two adjacent inner contour feature points and the two adjacent outer contour feature points corresponding to them; this filling operation is repeated until all areas between the inner contour feature points and the outer contour feature points are filled with the material.
  • FIG. 3 shows an example of material filling. The length of the material is L and its width is H; the distance between adjacent inner contour feature points is L, and the distance between an inner contour feature point and its corresponding outer contour feature point is H.
  • the area 1ab2 is therefore exactly the size of one piece of material, and the preset material just fills it, as shown by the shaded region 1ab2 in FIG. 3.
  • in one embodiment, the length and width of the preset material are obtained in advance; when the inner contour feature points are generated in step S102, they are sampled on the contour of the target object using the length of the preset material as the sampling distance, and the outer contour feature points are taken at a distance equal to the width of the preset material.
  • in another embodiment, the distance between the inner contour feature points is not L but is n times or 1/n times L; in this case the material is stretched or compressed when it is filled.
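A sketch of the filling operation of step S104: each quad formed by two adjacent inner points and their corresponding outer points (the region 1ab2 in FIG. 3) receives one copy of the material, stretched or compressed to fit, via a perspective warp. Treating the material as an H x L image and using OpenCV for the warp are assumptions of this sketch.

```python
def fill_strip(canvas: np.ndarray, material: np.ndarray,
               inner: np.ndarray, outer: np.ndarray) -> np.ndarray:
    """Warp the material into every quad (inner_i, inner_i+1, outer_i+1, outer_i)."""
    h, w = material.shape[:2]                       # material is H rows x L columns
    src = np.float32([[0, h], [w, h], [w, 0], [0, 0]])
    out = canvas.copy()
    n = len(inner)
    for i in range(n):
        j = (i + 1) % n
        # Quad like 1-2-b-a in FIG. 3: two adjacent inner points and their outer points.
        dst = np.float32([inner[i], inner[j], outer[j], outer[i]])
        m = cv2.getPerspectiveTransform(src, dst)
        warped = cv2.warpPerspective(material, m, (out.shape[1], out.shape[0]))
        quad_mask = np.zeros(out.shape[:2], dtype=np.uint8)
        cv2.fillConvexPoly(quad_mask, dst.astype(np.int32), 255)
        out[quad_mask > 0] = warped[quad_mask > 0]  # paste the warped material
    return out
```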
  • in one embodiment, before segmenting the target image to obtain the contour of the target object, the method further includes: setting a correspondence between the preset material and the target image.
  • multiple materials may be prepared in advance, corresponding to multiple target images.
  • the multiple target images may be pictures or video frames of a video.
  • when the target image is a video frame, a corresponding material can be set for each of multiple video frames to generate a different stroke effect for each frame of the video.
  • in this case, the stroke effect changes as the video plays.
  • in one embodiment, before segmenting the target image to obtain the contour of the target object, the method further includes: selecting the target object.
  • the target object may be any object that can be segmented from the target image.
  • the target object may be a human body, an animal such as a cat or dog, a plant, a building, and the like.
  • for different target objects, different object segmentation algorithms are called, so users can flexibly adjust the objects that need to be segmented.
  • in one embodiment, before segmenting the target image to obtain the contour of the target object, the method further includes: selecting the serial number of the target object to be segmented.
  • the serial number of the target object to be processed can be set in advance; if the serial number is set to 1, the image processing of the present disclosure is performed on the first segmented human body, and if it is set to 0, the image processing of the present disclosure is performed on all segmented human bodies.
  • the display properties of the outline can also be set; for example, a certain section of the outline can be set not to be displayed in one or more frames, or a certain section of the outline can be displayed randomly, so that the effect of the material flickering appears.
  • in one embodiment, the target object is a human body, and the human body is stroked to highlight its position in the image.
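Putting the pieces together for one video frame, under the same assumptions, might look like the sketch below; `segment_person` stands for whichever segmentation method is chosen and is not defined in the disclosure, and the other helpers are the illustrative functions sketched earlier.

```python
def stroke_frame(frame: np.ndarray, material: np.ndarray, segment_person) -> np.ndarray:
    """Apply the stroke effect to a single frame (illustrative only)."""
    mask = segment_person(frame)                          # step S101: segmentation mask
    contour = extract_target_contour(mask)
    length, width = material.shape[1], material.shape[0]
    inner = sample_inner_points(contour, spacing=length)  # step S102
    outer = generate_outer_points(inner, width=width)     # step S103
    return fill_strip(frame, material, inner, outer)      # step S104
```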
  • the present disclosure discloses an image processing method, device, and hardware device.
  • the image processing method includes: segmenting the target image to obtain the contour of the target object; generating inner contour feature points of the target object according to the contour of the target object; generating outer contour feature points according to the inner contour feature points; and filling a preset material into the area between the inner contour feature points and the outer contour feature points.
  • with this method, the target object to be processed can be segmented from the image and material can be added to the relevant area of the target object to form a special effect; when modifying the special effect, only the material needs to be modified and the image does not need to be re-edited, which improves the efficiency and flexibility of special-effect production.
  • the following is a device embodiment of the present disclosure.
  • the device embodiment of the present disclosure can be used to perform the steps implemented by the method embodiments of the present disclosure.
  • Only the parts related to the embodiments of the present disclosure are shown, and specific technical details are not disclosed; please refer to the method embodiments of the present disclosure.
  • an embodiment of the present disclosure provides an image processing device.
  • the device may perform the steps described in the above embodiments of the image processing method.
  • the device 500 mainly includes a contour acquisition module 501, an inner contour feature point generation module 502, an outer contour feature point generation module 503, and a filling module 504, where:
  • the contour acquisition module 501 is used to segment the target image to obtain the contour of the target object
  • An inner contour feature point generating module 502 configured to generate an inner contour feature point of the target object according to the contour of the target object;
  • the filling module 504 is used to fill the preset material into the area between the inner contour feature point and the outer contour feature point.
  • the inner contour feature point generating module 502 is configured to generate the inner contour feature point along the contour line of the target object, wherein the distance between adjacent inner contour feature points is the same.
  • the distance between the adjacent inner contour feature points is the length of the material.
  • the outer contour feature point generation module 503 includes an outer contour feature point generation sub-module for generating an outer contour feature point in a direction of the inner contour feature point away from the target object according to the inner contour feature point.
  • the outer contour feature point generating sub-module draws, through the first inner contour feature point, a perpendicular to the line segment connecting the first inner contour feature point and the second inner contour feature point, the first inner contour feature point and the second inner contour feature point being adjacent inner contour feature points; takes a first point on the perpendicular in the direction away from the target object such that the length of the line segment between the first point and the first inner contour feature point is a predetermined length; and uses the first point as the outer contour feature point corresponding to the first inner contour feature point.
  • the outer contour feature point generation sub-module is also used to generate, from a third inner contour feature point, two different outer contour feature points, namely a first auxiliary outer contour feature point and a second auxiliary outer contour feature point, where the third inner contour feature point is the inner contour feature point at an inflection point of the contour; calculate the intersection of the straight line through the first auxiliary outer contour feature point and the second outer contour feature point with the straight line through the second auxiliary outer contour feature point and the fourth outer contour feature point, where the second outer contour feature point is the outer contour feature point corresponding to the second inner contour feature point, the fourth outer contour feature point is the outer contour feature point corresponding to the fourth inner contour feature point, and the second inner contour feature point and the fourth inner contour feature point are the two inner contour feature points adjacent to the third inner contour feature point; and use the intersection as the third outer contour feature point corresponding to the third inner contour feature point.
  • the distance between the inner contour feature point and the outer contour feature point corresponding to the inner contour feature point is the width of the material.
  • the filling module 504 is configured to fill the material into the area formed by two adjacent inner contour feature points and the two adjacent outer contour feature points corresponding to them, and to repeat this filling operation until all areas between the inner contour feature points and the outer contour feature points are filled with the material.
  • the contour acquisition module 501 is used to acquire a video; segment the video frame image in the video; and separate the target object in the video frame image from other objects to obtain the contour of the target object.
  • the image processing apparatus 500 further includes a correspondence relationship setting module, which is used to set a correspondence relationship between the preset material and the target image.
  • the device shown in FIG. 5 can execute the method of the embodiment shown in FIG. 1.
  • FIG. 6 shows a schematic structural diagram of an electronic device 600 suitable for implementing embodiments of the present disclosure.
  • Electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (e.g., car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 6 is only an example, and should not bring any limitation to the functions and use scope of the embodiments of the present disclosure.
  • as shown in FIG. 6, the electronic device 600 may include a processing device (such as a central processing unit or a graphics processor) 601, which can perform various appropriate operations and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603.
  • in the RAM 603, various programs and data necessary for the operation of the electronic device 600 are also stored.
  • the processing device 601, ROM 602, and RAM 603 are connected to each other via a bus 604.
  • An input / output (I / O) interface 605 is also connected to the bus 604.
  • in general, the following devices can be connected to the I/O interface 605: an input device 606 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, and the like; an output device 607 including, for example, a vibrator; a storage device 608 including, for example, a magnetic tape or a hard disk; and a communication device 609.
  • the communication device 609 may allow the electronic device 600 to perform wireless or wired communication with other devices to exchange data.
  • although FIG. 6 shows an electronic device 600 having various devices, it should be understood that it is not required to implement or include all of the devices shown; more or fewer devices may alternatively be implemented or provided.
  • the process described above with reference to the flowchart may be implemented as a computer software program.
  • embodiments of the present disclosure include a computer program product that includes a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication device 609, or from the storage device 608, or from the ROM 602.
  • when the computer program is executed by the processing device 601, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • the computer-readable signal medium may include a data signal that is propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • the computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and the computer-readable signal medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to: electric wire, optical cable, RF (radio frequency), etc., or any suitable combination of the foregoing.
  • the computer-readable medium may be included in the electronic device; or it may exist alone without being assembled into the electronic device.
  • the computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: segment the target image to obtain the contour of the target object; generate inner contour feature points of the target object according to the contour of the target object; generate outer contour feature points according to the inner contour feature points; and fill a preset material into the area between the inner contour feature points and the outer contour feature points.
  • the computer program code for performing the operations of the present disclosure can be written in one or more programming languages or a combination thereof.
  • these include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in the flowchart or block diagram may represent a module, program segment, or part of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession can actually be executed in parallel, and sometimes they can also be executed in reverse order, depending on the functions involved.
  • each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented with dedicated hardware-based systems that perform the specified functions or operations, or can be implemented with a combination of dedicated hardware and computer instructions.
  • the units described in the embodiments of the present disclosure may be implemented in software or hardware; in some cases, the name of a unit does not constitute a limitation on the unit itself.

Abstract

Disclosed are an image processing method, device, and hardware device. The image processing method comprises: segmenting a target image to obtain an outline of a target object; generating inner outline feature points of the target object according to the outline of the target object; generating outer outline feature points according to the inner outline feature points; and filling a preset material into a region between the inner outline feature points and the outer outline feature points. With the image processing method of the embodiments of the present disclosure, it is possible to segment from an image a target object which needs to be processed and add material to a relevant region of the target object to form a special effect; when modifying the special effect, only the material needs to be modified and there is no need to re-edit the image, so the efficiency and flexibility of producing special effects are improved.

Description

Image processing method, device, and hardware device
Cross Reference
The present disclosure refers to the Chinese patent application No. 201811222643.5, filed on October 19, 2018 and titled "Image Processing Method, Device, and Hardware Device", which is incorporated into this application by reference in its entirety.
Technical Field
The present disclosure relates to the field of image processing, and in particular to an image processing method, device, and hardware device.
Background
With the development of computer technology, the range of applications of smart terminals has expanded greatly; for example, they can be used to listen to music, play games, chat online, and take pictures. The cameras of smart terminals have reached more than ten million pixels, offering high definition and a photographing effect comparable to that of professional cameras.
At present, when taking pictures with a smart terminal, users can not only use the camera software built in at the factory to achieve traditional photographing effects, but also download applications (APPs) from the network to obtain additional functions, for example APPs that provide dark-light detection, beauty camera, and super-pixel functions. The beautification function of a smart terminal usually includes effects such as skin tone adjustment, skin smoothing, eye enlargement, and face slimming, and can apply the same degree of beautification to all faces recognized in an image. There are also APPs that can produce simple special effects.
However, current special-effect functions can only apply a preconfigured effect and composite it into the video or image; if the effect needs to be modified, it must be recreated and composited into the video or image again, which makes the generation of special effects very inflexible.
Summary
According to one aspect of the present disclosure, the following technical solution is provided:
An image processing method, including: segmenting a target image to obtain a contour of a target object; generating inner contour feature points of the target object according to the contour of the target object; generating outer contour feature points according to the inner contour feature points; and filling a preset material into the area between the inner contour feature points and the outer contour feature points.
Further, generating the inner contour feature points of the target object according to the contour of the target object includes: generating the inner contour feature points along the contour line of the target object.
Further, the distances between adjacent inner contour feature points are the same.
Further, generating the outer contour feature points according to the inner contour feature points includes: generating, for each inner contour feature point, an outer contour feature point in the direction away from the target object.
Further, generating the outer contour feature point in the direction away from the target object according to the inner contour feature point includes: drawing, through a first inner contour feature point, a perpendicular to the line segment connecting the first inner contour feature point and a second inner contour feature point, the first inner contour feature point and the second inner contour feature point being adjacent inner contour feature points; taking a first point on the perpendicular in the direction away from the target object such that the length of the line segment between the first point and the first inner contour feature point is a predetermined length; and using the first point as the outer contour feature point corresponding to the first inner contour feature point.
Further, generating the outer contour feature point in the direction away from the target object according to the inner contour feature point further includes: generating, from a third inner contour feature point, two different outer contour feature points, namely a first auxiliary outer contour feature point and a second auxiliary outer contour feature point, where the third inner contour feature point is the inner contour feature point at an inflection point of the contour; calculating the intersection of the straight line through the first auxiliary outer contour feature point and a second outer contour feature point with the straight line through the second auxiliary outer contour feature point and a fourth outer contour feature point, where the second outer contour feature point is the outer contour feature point corresponding to the second inner contour feature point, the fourth outer contour feature point is the outer contour feature point corresponding to a fourth inner contour feature point, and the second inner contour feature point and the fourth inner contour feature point are the two inner contour feature points adjacent to the third inner contour feature point; and using the intersection as the third outer contour feature point corresponding to the third inner contour feature point.
Further, the distance between an inner contour feature point and the outer contour feature point corresponding to it is the width of the material.
Further, filling the preset material into the area between the inner contour feature points and the outer contour feature points includes: filling the material into the area formed by two adjacent inner contour feature points and the two adjacent outer contour feature points corresponding to them; and repeating this filling operation until all areas between the inner contour feature points and the outer contour feature points are filled with the material.
Further, segmenting the target image to obtain the contour of the target object includes: acquiring a video; segmenting the video frame images in the video; and separating the target object in the video frame images from other objects to obtain the contour of the target object.
Further, before segmenting the target image to obtain the contour of the target object, the method further includes: setting a correspondence between the preset material and the target image.
According to another aspect of the present disclosure, the following technical solution is also provided:
An image processing device, including:
a contour acquisition module, configured to segment a target image to obtain a contour of a target object;
an inner contour feature point generating module, configured to generate inner contour feature points of the target object according to the contour of the target object;
an outer contour feature point generating module, configured to generate outer contour feature points according to the inner contour feature points;
a filling module, configured to fill a preset material into the area between the inner contour feature points and the outer contour feature points.
Further, the inner contour feature point generating module is configured to generate the inner contour feature points along the contour line of the target object.
Further, the distances between adjacent inner contour feature points are the same.
Further, the outer contour feature point generating module includes an outer contour feature point generating sub-module, configured to generate, for each inner contour feature point, an outer contour feature point in the direction away from the target object.
Further, the outer contour feature point generating sub-module is configured to: draw, through a first inner contour feature point, a perpendicular to the line segment connecting the first inner contour feature point and a second inner contour feature point, the first inner contour feature point and the second inner contour feature point being adjacent inner contour feature points; take a first point on the perpendicular in the direction away from the target object such that the length of the line segment between the first point and the first inner contour feature point is a predetermined length; and use the first point as the outer contour feature point corresponding to the first inner contour feature point.
Further, the outer contour feature point generating sub-module is further configured to: generate, from a third inner contour feature point, two different outer contour feature points, namely a first auxiliary outer contour feature point and a second auxiliary outer contour feature point, where the third inner contour feature point is the inner contour feature point at an inflection point of the contour; calculate the intersection of the straight line through the first auxiliary outer contour feature point and a second outer contour feature point with the straight line through the second auxiliary outer contour feature point and a fourth outer contour feature point, where the second outer contour feature point is the outer contour feature point corresponding to the second inner contour feature point, the fourth outer contour feature point is the outer contour feature point corresponding to a fourth inner contour feature point, and the second inner contour feature point and the fourth inner contour feature point are the two inner contour feature points adjacent to the third inner contour feature point; and use the intersection as the third outer contour feature point corresponding to the third inner contour feature point.
Further, the distance between an inner contour feature point and the outer contour feature point corresponding to it is the width of the material.
Further, the filling module is configured to fill the material into the area formed by two adjacent inner contour feature points and the two adjacent outer contour feature points corresponding to them, and to repeat this filling operation until all areas between the inner contour feature points and the outer contour feature points are filled with the material.
Further, the contour acquisition module is configured to acquire a video, segment the video frame images in the video, and separate the target object in the video frame images from other objects to obtain the contour of the target object.
Further, the image processing device further includes a correspondence setting module, configured to set a correspondence between the preset material and the target image.
According to yet another aspect of the present disclosure, the following technical solution is also provided:
An electronic device, including: a memory for storing non-transitory computer-readable instructions; and a processor for running the computer-readable instructions such that, when they are executed, the processor implements the steps of any of the above image processing methods.
According to yet another aspect of the present disclosure, the following technical solution is also provided:
A computer-readable storage medium for storing non-transitory computer-readable instructions which, when executed by a computer, cause the computer to perform the steps described in any of the above methods.
The present disclosure discloses an image processing method, device, and hardware device. The image processing method includes: segmenting a target image to obtain a contour of a target object; generating inner contour feature points of the target object according to the contour of the target object; generating outer contour feature points according to the inner contour feature points; and filling a preset material into the area between the inner contour feature points and the outer contour feature points. With the image processing method of the embodiments of the present disclosure, the target object to be processed can be segmented from the image and material can be added to the relevant area of the target object to form a special effect; when the special effect is modified, only the material needs to be modified and the image does not need to be re-edited, which improves the efficiency and flexibility of special-effect production.
The above description is only an overview of the technical solutions of the present disclosure. In order that the technical means of the present disclosure can be understood more clearly and implemented in accordance with the contents of the specification, and in order to make the above and other objects, features, and advantages of the present disclosure more apparent and understandable, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a method for generating outer contour feature points according to the present disclosure;
FIG. 3 is a schematic diagram of a material filling method according to the present disclosure;
FIG. 4 is a schematic diagram of the effect of an image processed by an image processing method according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
具体实施方式detailed description
The embodiments of the present disclosure are described below through specific examples, and those skilled in the art can easily understand other advantages and effects of the present disclosure from the contents disclosed in this specification. Obviously, the described embodiments are only a part of the embodiments of the present disclosure, not all of them. The present disclosure can also be implemented or applied through other different specific embodiments, and various details in this specification can be modified or changed based on different viewpoints and applications without departing from the spirit of the present disclosure. It should be noted that, in the absence of conflict, the following embodiments and the features in the embodiments can be combined with each other. Based on the embodiments of the present disclosure, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the protection scope of the present disclosure.
It should be noted that various aspects of the embodiments within the scope of the appended claims are described below. It should be apparent that the aspects described herein may be embodied in a wide variety of forms, and any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, those skilled in the art should understand that an aspect described herein can be implemented independently of any other aspect, and two or more of these aspects can be combined in various ways. For example, a device may be implemented and/or a method may be practiced using any number of the aspects set forth herein. In addition, such a device may be implemented and/or such a method may be practiced using other structures and/or functionality in addition to or instead of one or more of the aspects set forth herein.
It should also be noted that the illustrations provided in the following embodiments only illustrate the basic concept of the present disclosure in a schematic manner. The drawings show only the components related to the present disclosure rather than the number, shape and size of the components in actual implementation; the type, quantity and proportion of each component can be changed at will in actual implementation, and the component layout may also be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, those skilled in the art will understand that the described aspects may be practiced without these specific details.
An embodiment of the present disclosure provides an image processing method. The image processing method provided in this embodiment may be executed by a computing device, which may be implemented as software or as a combination of software and hardware, and which may be integrated in a server, a terminal device, or the like. As shown in FIG. 1, the image processing method mainly includes the following steps S101 to S104.
Step S101: segment a target image to obtain a contour of a target object.
In this step, the target image may be any image. In one embodiment, the target image is a picture that includes a target object; the target object may be any object, and in one embodiment the target object is a human body. The target image is segmented so that the target object is separated from the other objects in the image to obtain the contour of the target object. In one embodiment, when the target image is a video, the video is first acquired; the video frame images in the video are segmented; and the target object in each video frame image is separated from the other objects to obtain the contour of the target object.
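As an illustration only (not part of the original disclosure), the following sketch shows one way step S101 could be realized: a binary mask for the target object is assumed to come from any person or object segmentation model, and the object's contour is then extracted from that mask with OpenCV. The function name `object_contour` and the OpenCV 4-style return value of `findContours` are assumptions of this sketch.

```python
import cv2
import numpy as np

def object_contour(mask: np.ndarray) -> np.ndarray:
    """Return the largest external contour of a segmented object.

    `mask` is a single-channel image in which the target object is non-zero;
    how the mask is produced (which segmentation model) is left open here.
    """
    mask_bin = (mask > 0).astype(np.uint8) * 255
    # OpenCV 4 returns (contours, hierarchy); adjust the unpacking for OpenCV 3.
    contours, _ = cv2.findContours(mask_bin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        raise ValueError("no target object found in the mask")
    largest = max(contours, key=cv2.contourArea)        # keep the main object only
    return largest.reshape(-1, 2).astype(np.float32)    # (N, 2) array of (x, y)
```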
Image segmentation is generally divided into interactive image segmentation and automatic image segmentation. Traditional image processing generally uses interactive image segmentation, which requires human participation. The present disclosure uses automatic image segmentation, which is described below taking human-body image segmentation as an example.
In general, automatic human-body image segmentation methods fall into the following categories. (1) Model-based methods: a face is first detected based on prior knowledge of human faces, a torso model is then used to find the torso below the face, the position of the lower body is estimated from the segmented torso, and finally the estimated torso and upper-leg regions provide seed points for image segmentation to complete the segmentation of the human body image. (2) Hierarchical-tree-based methods: adjacent body parts are modeled first and then the whole body pose is modeled; different poses are modeled as sums of nodes along different paths of a hierarchical detection tree, where different levels of the tree correspond to models of different adjacent body parts and different paths correspond to different poses; during detection, the tree is traversed downward from the root node, and different paths segment different poses of the human body. (3) Reference-signal-based independent component analysis methods: a face is first detected from prior knowledge of human faces, a torso model is used to find the torso below the face, a reference signal is obtained from the detected torso, and independent component analysis of the reference signal is then used to bring the torso out of the image to complete the torso segmentation; the other body parts are segmented similarly, finally completing the segmentation of the whole human body image. (4) Expectation-maximization-based methods: a pictorial structure model is first used to estimate the human pose in the image and obtain a pose probability map, and an image segmentation method is then applied on top of the probability map to obtain the final segmented human body image. Of course, other human-body image segmentation methods can also be used and are not described in detail here; any image segmentation method may be adopted in the present disclosure to segment the target object from the target image.
Step S102: generate inner contour feature points of the target object according to the contour of the target object.
In this step, inner contour feature points of the target object are generated according to the contour of the target object obtained in step S101. The inner contour feature points may lie directly on the contour line of the target object, or may keep a predetermined distance from the contour line; for example, an inner contour feature point may be kept at a distance of 0.1 cm from the contour line. In one embodiment, the distance between adjacent inner contour feature points is the same, that is, the inner contour feature points are evenly distributed along the contour of the target object.
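A minimal sketch of step S102, assuming the contour is available as an (N, 2) array of (x, y) points (for example from the previous sketch): the contour is resampled so that adjacent inner contour feature points are an equal arc length `step` apart, such as the material length L. The function name is illustrative.

```python
import numpy as np

def inner_feature_points(contour: np.ndarray, step: float) -> np.ndarray:
    """Sample points along a closed contour at equal arc-length spacing."""
    closed = np.vstack([contour, contour[:1]])             # close the loop
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)  # edge lengths
    cumlen = np.concatenate([[0.0], np.cumsum(seg)])       # arc length at each vertex
    targets = np.arange(0.0, cumlen[-1], step)             # evenly spaced arc lengths
    points = []
    for t in targets:
        i = np.searchsorted(cumlen, t, side="right") - 1   # edge containing arc length t
        frac = (t - cumlen[i]) / seg[i] if seg[i] > 0 else 0.0
        points.append(closed[i] + frac * (closed[i + 1] - closed[i]))
    return np.array(points)
```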
Step S103: generate outer contour feature points according to the inner contour feature points.
In this step, outer contour feature points are generated from the inner contour feature points produced in step S102. In one embodiment, an outer contour feature point is generated from an inner contour feature point in the direction away from the target object, and the generation may be an interpolation process. Taking a human body as the target object, the inner contour feature points lie on the contour line of the human body, and for each inner contour feature point a corresponding outer contour feature point is generated outside the human body. In one embodiment, a perpendicular is drawn through a first inner contour feature point to the line segment connecting the first inner contour feature point and a second inner contour feature point, the first and second inner contour feature points being adjacent; a first point is taken on the perpendicular in the direction away from the target object such that the length of the line segment between the first point and the first inner contour feature point is a predetermined length, and the first point is taken as the outer contour feature point corresponding to the first inner contour feature point. These two steps can be repeated until every first inner contour feature point has a corresponding outer contour feature point. For a third inner contour feature point, which is an inner contour feature point located at an inflection point of the contour, two different outer contour feature points are generated, namely a first auxiliary outer contour feature point and a second auxiliary outer contour feature point; the intersection of the straight line through the first auxiliary outer contour feature point and a second outer contour feature point with the straight line through the second auxiliary outer contour feature point and a fourth outer contour feature point is then calculated, where the second outer contour feature point is the outer contour feature point corresponding to the second inner contour feature point, the fourth outer contour feature point is the outer contour feature point corresponding to a fourth inner contour feature point, and the second and fourth inner contour feature points are the two inner contour feature points adjacent to the third inner contour feature point; the intersection is taken as the third outer contour feature point corresponding to the third inner contour feature point.
Referring to FIG. 2, which shows a specific example of the above embodiment, the inner contour feature points include points 1, 2, 3, 4, 5 and 6, where points 1, 2, 3, 5 and 6 correspond to the first and second inner contour feature points in the above embodiment, and point 4, an inflection point of the target object's contour, corresponds to the third inner contour feature point. Taking point 2 as an example, a perpendicular is drawn through point 2 to segment 12, and a point b is taken on the perpendicular in the direction away from the target object such that the length of segment 2b is H, where H is a preset length. Every inner contour feature point has two adjacent inner contour feature points; for point 2, besides point 1, point 3 is also adjacent, so another perpendicular is drawn through point 2, this time perpendicular to segment 23. In this example points 1, 2 and 3 lie on the same straight line, so the perpendicular to segment 12 through point 2 coincides with the perpendicular to segment 23, the two resulting points b coincide, and point b can be determined as the outer contour feature point corresponding to point 2. Repeating this operation for each point gives the outer contour feature points a, b, c, f and g corresponding to the inner contour feature points 1, 2, 3, 5 and 6. This example also includes a special case: point 4 in FIG. 2 is an inflection point located at a corner of the target object's contour. Since point 4 and its adjacent points 3 and 5 are not collinear, the perpendiculars drawn through point 4 to segments 34 and 45 do not coincide, producing two points d and e. In this case segments cd and fe are extended, the extensions intersect at point h, and point h is determined as the outer contour feature point corresponding to inner contour feature point 4.
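The sketch below is one possible reading of the construction around FIG. 2, not the patent's reference implementation: each inner contour feature point is pushed outward by the preset width along the normals of its two adjacent contour edges, and at an inflection point the two offset lines are extended and intersected, which reproduces the point h obtained from segments cd and fe. The contour is assumed counter-clockwise in a y-up coordinate frame; flip the normal sign for the opposite winding.

```python
import numpy as np

def _outward_normal(p: np.ndarray, q: np.ndarray) -> np.ndarray:
    d = q - p
    n = np.array([d[1], -d[0]])                 # outward for a CCW, y-up contour
    return n / (np.linalg.norm(n) + 1e-12)

def outer_feature_points(inner: np.ndarray, width: float) -> np.ndarray:
    """Generate one outer contour feature point per inner contour feature point."""
    count = len(inner)
    outer = np.zeros_like(inner, dtype=float)
    for i in range(count):
        prev_pt, cur, nxt = inner[i - 1], inner[i], inner[(i + 1) % count]
        n1 = _outward_normal(prev_pt, cur)      # normal of the edge before `cur`
        n2 = _outward_normal(cur, nxt)          # normal of the edge after `cur`
        p1, d1 = prev_pt + width * n1, cur - prev_pt   # offset line of the first edge
        p2, d2 = cur + width * n2, nxt - cur           # offset line of the second edge
        cross = d1[0] * d2[1] - d1[1] * d2[0]
        if abs(cross) < 1e-9:
            outer[i] = cur + width * n1         # collinear edges: candidates coincide
        else:
            # Inflection point: intersect the two extended offset lines (point h).
            t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / cross
            outer[i] = p1 + t * d1
    return outer
```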
It should be noted that the above method of generating outer contour feature points is only an example and does not limit the present disclosure; in fact, any method of generating outer contour points from inner contour points can be used in the present disclosure.
Step S104: fill a preset material into the area between the inner contour feature points and the outer contour feature points.
In the present disclosure, the preset material may be a color card of fixed size. In this step, the material is filled into the area between the inner contour feature points and the outer contour feature points to form a stroke effect around the target object. In one embodiment, the filling process is as follows: the material is filled into the area formed by two adjacent inner contour feature points and the two adjacent outer contour feature points corresponding to them; this filling operation is repeated until the whole area between the inner contour feature points and the outer contour feature points is filled with the material. FIG. 3 shows an example of material filling. In this example, the length of the material is L and its width is H; that is, the distance between adjacent inner contour feature points is L and the distance between an inner contour feature point and its corresponding outer contour feature point is H, so area 1ab2 is exactly the size of one piece of material and the preset material fills it exactly, as shown by the shaded area 1ab2 in FIG. 3. In one embodiment, the length and width attributes of the preset material are obtained in advance; when the inner contour feature points are generated in step S102, they are sampled along the contour of the target object with the length of the preset material as the sampling distance, and in step S103 the outer contour feature points are taken at a distance equal to the width of the preset material. In another embodiment, the distance between adjacent inner contour feature points is not L but may be n times or 1/n of L; in that case the material is stretched or truncated before it is filled in.
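One way the filling of step S104 could be implemented with OpenCV is sketched below: the material (an H x L image patch such as a color card) is warped into each quadrilateral formed by two adjacent inner contour feature points and their corresponding outer contour feature points, such as area 1ab2 in FIG. 3. The quad ordering and the use of a perspective warp are assumptions of this sketch; stretching or truncation for spacings other than L is omitted.

```python
import cv2
import numpy as np

def fill_stroke(image: np.ndarray, inner: np.ndarray, outer: np.ndarray,
                material: np.ndarray) -> np.ndarray:
    """Fill each inner/outer quad with the material to draw the stroke band."""
    h, w = material.shape[:2]
    src = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
    out = image.copy()
    size = (out.shape[1], out.shape[0])
    for i in range(len(inner)):
        j = (i + 1) % len(inner)
        # Quad such as 1-2-b-a in FIG. 3: inner[i], inner[j], outer[j], outer[i].
        dst = np.float32([inner[i], inner[j], outer[j], outer[i]])
        M = cv2.getPerspectiveTransform(src, dst)
        warped = cv2.warpPerspective(material, M, size)
        quad_mask = cv2.warpPerspective(np.full((h, w), 255, np.uint8), M, size)
        out[quad_mask > 0] = warped[quad_mask > 0]
    return out
```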
In one embodiment, before the target image is segmented to obtain the contour of the target object, the method further includes: setting a correspondence between preset materials and target images. In this embodiment, multiple materials can be prepared in advance to correspond to multiple target images, which may be pictures or video frames of a video. When the target images are video frames, corresponding materials are set for multiple video frames so that a different stroke effect is generated for each frame; when the video is played, the stroke effect changes as the video plays.
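Purely as an illustration of such a correspondence (the keys, file names and look-up rule below are hypothetical, not taken from the disclosure), the mapping could be as simple as a frame-indexed table:

```python
# Hypothetical frame-to-material table: each key is the first frame index at
# which the listed material becomes active.
frame_material_map = {
    0: "materials/stroke_red.png",
    30: "materials/stroke_blue.png",
    60: "materials/stroke_rainbow.png",
}

def material_for_frame(frame_index: int) -> str:
    """Return the material that applies to the given video frame."""
    active = [k for k in frame_material_map if k <= frame_index]
    return frame_material_map[max(active)]
```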
In one embodiment, before the target image is segmented to obtain the contour of the target object, the method further includes: selecting the target object. The target object may be any object that can be segmented from the target image; in this embodiment it may be a human body, various animals such as cats and dogs, plants, buildings, and so on. When a different target object is selected, a different object segmentation algorithm is invoked, so the user can flexibly adjust which object is to be segmented.
In one embodiment, before the target image is segmented to obtain the contour of the target object, the method further includes: selecting the serial number of the target object to be segmented. There may be multiple target objects in the target image, for example multiple human bodies in a video frame. In this case the serial number of the target object to be processed can be preset: if the serial number is set to 1, the image processing of the present disclosure is performed on the first segmented human body, and if it is set to 0, the image processing of the present disclosure is performed on all segmented human bodies.
In one embodiment, display attributes of the contour can be set; for example, a certain segment of the contour can be set not to be displayed in one frame or in some frames, or a segment of the contour can be displayed randomly, which produces a flickering effect of the material.
FIG. 4 shows an example of the effect of processing an image with the image processing method disclosed in the present disclosure. In this example, the target object is a human body, and the human body is stroked to highlight its position in the image.
The present disclosure discloses an image processing method, device, and hardware device. The image processing method includes: segmenting a target image to obtain a contour of a target object; generating an inner contour feature point of the target object according to the contour of the target object; generating an outer contour feature point according to the inner contour feature point; and filling a preset material into the area between the inner contour feature point and the outer contour feature point. With the image processing method of the embodiments of the present disclosure, the target object to be processed can be segmented from the image, and material can be added to the relevant area of the target object to form a special effect. When the special effect needs to be modified, only the material needs to be changed and the image does not have to be re-edited, which improves the efficiency and flexibility of special-effect production.
Although the steps in the above method embodiments are described in the above order, those skilled in the art should understand that the steps in the embodiments of the present disclosure are not necessarily performed in that order; they may also be performed in reverse order, in parallel, interleaved, or in other orders. Moreover, on the basis of the above steps, those skilled in the art may also add other steps; these obvious variations or equivalent substitutions are also included in the protection scope of the present disclosure and are not repeated here.
The following are device embodiments of the present disclosure, which can be used to perform the steps implemented by the method embodiments of the present disclosure. For ease of description, only the parts related to the embodiments of the present disclosure are shown; for specific technical details that are not disclosed, please refer to the method embodiments of the present disclosure.
An embodiment of the present disclosure provides an image processing device. The device can perform the steps described in the above image processing method embodiments. As shown in FIG. 5, the device 500 mainly includes: a contour acquisition module 501, an inner contour feature point generation module 502, an outer contour feature point generation module 503 and a filling module 504.
The contour acquisition module 501 is configured to segment a target image to obtain a contour of a target object.
The inner contour feature point generation module 502 is configured to generate inner contour feature points of the target object according to the contour of the target object.
The outer contour feature point generation module 503 is configured to generate outer contour feature points according to the inner contour feature points.
The filling module 504 is configured to fill a preset material into the area between the inner contour feature points and the outer contour feature points.
Further, the inner contour feature point generation module 502 is configured to generate the inner contour feature points along the contour line of the target object, where the distances between adjacent inner contour feature points are the same.
Further, the distance between adjacent inner contour feature points is the length of the material.
Further, the outer contour feature point generation module 503 includes an outer contour feature point generation sub-module configured to generate, from an inner contour feature point, an outer contour feature point in the direction away from the target object.
Further, the outer contour feature point generation sub-module draws a perpendicular through a first inner contour feature point to the line segment connecting the first inner contour feature point and a second inner contour feature point, the first and second inner contour feature points being adjacent; takes a first point on the perpendicular in the direction away from the target object, the length of the line segment between the first point and the first inner contour feature point being a predetermined length; and takes the first point as the outer contour feature point corresponding to the first inner contour feature point.
Further, the outer contour feature point generation sub-module is also configured to: generate two different outer contour feature points from a third inner contour feature point, namely a first auxiliary outer contour feature point and a second auxiliary outer contour feature point, where the third inner contour feature point is the inner contour feature point at an inflection point of the contour; calculate the intersection of the straight line through the first auxiliary outer contour feature point and a second outer contour feature point with the straight line through the second auxiliary outer contour feature point and a fourth outer contour feature point, where the second outer contour feature point is the outer contour feature point corresponding to the second inner contour feature point, the fourth outer contour feature point is the outer contour feature point corresponding to a fourth inner contour feature point, and the second and fourth inner contour feature points are the two inner contour feature points adjacent to the third inner contour feature point; and take the intersection as the third outer contour feature point corresponding to the third inner contour feature point.
Further, the distance between an inner contour feature point and the outer contour feature point corresponding to it is the width of the material.
Further, the filling module 504 is configured to fill the material into the area formed by two adjacent inner contour feature points and the two adjacent outer contour feature points corresponding to them, and to repeat this filling operation until the whole area between the inner contour feature points and the outer contour feature points is filled with the material.
Further, the contour acquisition module 501 is configured to acquire a video, segment the video frame images in the video, and separate the target object in each video frame image from the other objects to obtain the contour of the target object.
Further, the image processing device 500 also includes a correspondence setting module configured to set a correspondence between preset materials and target images.
The device shown in FIG. 5 can execute the method of the embodiment shown in FIG. 1; for the parts not described in detail in this embodiment, refer to the related description of the embodiment shown in FIG. 1. For the execution process and technical effects of this technical solution, see the description of the embodiment shown in FIG. 1, which is not repeated here.
Referring now to FIG. 6, it shows a schematic structural diagram of an electronic device 600 suitable for implementing the embodiments of the present disclosure. Electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and in-vehicle terminals (e.g., in-vehicle navigation terminals), and fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 6 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 6, the electronic device 600 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600. The processing device 601, the ROM 602 and the RAM 603 are connected to each other through a bus 604; an input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices can be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer and gyroscope; output devices 607 including, for example, a liquid crystal display (LCD), a speaker and a vibrator; storage devices 608 including, for example, a magnetic tape and a hard disk; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 6 shows an electronic device 600 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may be implemented or provided instead.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowchart may be implemented as computer software programs. For example, the embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 609, or installed from the storage device 608, or installed from the ROM 602. When the computer program is executed by the processing device 601, the above functions defined in the methods of the embodiments of the present disclosure are performed.
It should be noted that the above computer-readable medium of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, device or component, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, the computer-readable storage medium may be any tangible medium that contains or stores a program which can be used by or in combination with an instruction execution system, device or component. In the present disclosure, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate or transmit a program for use by or in combination with an instruction execution system, device or component. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to an electric wire, an optical cable, RF (radio frequency), or any suitable combination of the above.
The above computer-readable medium may be included in the above electronic device, or it may exist alone without being assembled into the electronic device.
The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: segment a target image to obtain a contour of a target object; generate an inner contour feature point of the target object according to the contour of the target object; generate an outer contour feature point according to the inner contour feature point; and fill a preset material into the area between the inner contour feature point and the outer contour feature point.
The computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof; the programming languages include object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functionality and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment or a part of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings; for example, two blocks shown in succession may actually be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented in software or in hardware, and in some cases the name of a unit does not constitute a limitation on the unit itself.
The above description is only a preferred embodiment of the present disclosure and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to the technical solutions formed by the specific combination of the above technical features, but also covers other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features with similar functions disclosed in (but not limited to) the present disclosure.

Claims (13)

  1. An image processing method, comprising:
    segmenting a target image to obtain a contour of a target object;
    generating an inner contour feature point of the target object according to the contour of the target object;
    generating an outer contour feature point according to the inner contour feature point; and
    filling a preset material into the area between the inner contour feature point and the outer contour feature point.
  2. The image processing method according to claim 1, wherein the generating an inner contour feature point of the target object according to the contour of the target object comprises:
    generating the inner contour feature point along the contour line of the target object.
  3. The image processing method according to claim 2, wherein the distances between adjacent inner contour feature points are the same.
  4. The image processing method according to claim 1, wherein the generating an outer contour feature point according to the inner contour feature point comprises:
    generating the outer contour feature point in a direction of the inner contour feature point away from the target object according to the inner contour feature point.
  5. The image processing method according to claim 4, wherein the generating the outer contour feature point in a direction of the inner contour feature point away from the target object comprises:
    drawing a perpendicular through a first inner contour feature point to the line segment connecting the first inner contour feature point and a second inner contour feature point, the first inner contour feature point and the second inner contour feature point being adjacent inner contour feature points; and
    taking a first point on the perpendicular in the direction away from the target object, a length of the line segment between the first point and the first inner contour feature point being a predetermined length, and taking the first point as the outer contour feature point corresponding to the first inner contour feature point.
  6. The image processing method according to claim 5, wherein the generating the outer contour feature point in a direction of the inner contour feature point away from the target object further comprises:
    generating two different outer contour feature points from a third inner contour feature point, namely a first auxiliary outer contour feature point and a second auxiliary outer contour feature point, the third inner contour feature point being the inner contour feature point at an inflection point of the contour of the target object;
    calculating an intersection of the straight line on which the first auxiliary outer contour feature point and a second outer contour feature point lie with the straight line on which the second auxiliary outer contour feature point and a fourth outer contour feature point lie, wherein the second outer contour feature point is the outer contour feature point corresponding to the second inner contour feature point, the fourth outer contour feature point is the outer contour feature point corresponding to a fourth inner contour feature point, and the second inner contour feature point and the fourth inner contour feature point are the two inner contour feature points adjacent to the third inner contour feature point; and
    taking the intersection as a third outer contour feature point corresponding to the third inner contour feature point.
  7. The image processing method according to claim 1, wherein the distance between the inner contour feature point and the outer contour feature point corresponding to the inner contour feature point is the width of the material.
  8. The image processing method according to claim 1, wherein the filling a preset material into the area between the inner contour feature point and the outer contour feature point comprises:
    filling the material into the area formed by two adjacent inner contour feature points and the two adjacent outer contour feature points corresponding to the two adjacent inner contour feature points; and
    repeating the filling operation until all areas between the inner contour feature points and the outer contour feature points are filled with the material.
  9. The image processing method according to claim 1, wherein the segmenting a target image to obtain a contour of a target object comprises:
    acquiring a video;
    segmenting a video frame image in the video; and
    separating the target object in the video frame image from other objects to obtain the contour of the target object.
  10. The image processing method according to claim 1, wherein before the segmenting a target image to obtain a contour of a target object, the method further comprises:
    setting a correspondence between the preset material and the target image.
  11. An image processing device, comprising:
    a contour acquisition module configured to segment a target image to obtain a contour of a target object;
    an inner contour feature point generation module configured to generate an inner contour feature point of the target object according to the contour of the target object;
    an outer contour feature point generation module configured to generate an outer contour feature point according to the inner contour feature point; and
    a filling module configured to fill a preset material into the area between the inner contour feature point and the outer contour feature point.
  12. An electronic device, comprising:
    a memory configured to store non-transitory computer-readable instructions; and
    a processor configured to run the computer-readable instructions, such that the processor, when executing the instructions, implements the image processing method according to any one of claims 1-10.
  13. A computer-readable storage medium configured to store non-transitory computer-readable instructions which, when executed by a computer, cause the computer to execute the image processing method according to any one of claims 1-10.
PCT/CN2019/073082 2018-10-19 2019-01-25 Image processing method, device, and hardware device WO2020077912A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811222643.5 2018-10-19
CN201811222643.5A CN110070554A (en) 2018-10-19 2018-10-19 Image processing method, device, hardware device

Publications (1)

Publication Number Publication Date
WO2020077912A1 true WO2020077912A1 (en) 2020-04-23

Family

ID=67365892

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/073082 WO2020077912A1 (en) 2018-10-19 2019-01-25 Image processing method, device, and hardware device

Country Status (2)

Country Link
CN (1) CN110070554A (en)
WO (1) WO2020077912A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308769B (en) * 2020-10-30 2022-06-10 北京字跳网络技术有限公司 Image synthesis method, apparatus and storage medium
CN112581620A (en) * 2020-11-30 2021-03-30 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN114125320B (en) * 2021-08-31 2023-05-09 北京达佳互联信息技术有限公司 Method and device for generating special effects of image

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5751852A (en) * 1996-04-29 1998-05-12 Xerox Corporation Image structure map data structure for spatially indexing an imgage
US6300955B1 (en) * 1997-09-03 2001-10-09 Mgi Software Corporation Method and system for mask generation
CN101950427A (en) * 2010-09-08 2011-01-19 东莞电子科技大学电子信息工程研究院 Vector line segment contouring method applicable to mobile terminal
CN104520901A (en) * 2012-08-09 2015-04-15 高通股份有限公司 Gpu-accelerated rendering of paths with a dash pattern
CN105513006A (en) * 2014-10-16 2016-04-20 北京汉仪科印信息技术有限公司 Outline thickness adjusting method and device of TrueType font
CN108399654A (en) * 2018-02-06 2018-08-14 北京市商汤科技开发有限公司 It retouches in the generation of special efficacy program file packet and special efficacy generation method and device when retouching

Also Published As

Publication number Publication date
CN110070554A (en) 2019-07-30

Similar Documents

Publication Publication Date Title
WO2020186935A1 (en) Virtual object displaying method and device, electronic apparatus, and computer-readable storage medium
WO2020077913A1 (en) Image processing method and device, and hardware device
JP7199527B2 (en) Image processing method, device, hardware device
CN110189246B (en) Image stylization generation method and device and electronic equipment
CN110070496B (en) Method and device for generating image special effect and hardware device
JP2024505995A (en) Special effects exhibition methods, devices, equipment and media
US11776209B2 (en) Image processing method and apparatus, electronic device, and storage medium
WO2020077912A1 (en) Image processing method, device, and hardware device
WO2020192195A1 (en) Image processing method and apparatus, and electronic device
CN110047121B (en) End-to-end animation generation method and device and electronic equipment
CN110378947B (en) 3D model reconstruction method and device and electronic equipment
WO2021098361A1 (en) Topographic map editing method, device, electronic apparatus, and computer readable medium
CN110035271B (en) Fidelity image generation method and device and electronic equipment
CN109754464B (en) Method and apparatus for generating information
CN113806306B (en) Media file processing method, device, equipment, readable storage medium and product
CN112714263B (en) Video generation method, device, equipment and storage medium
CN110378948B (en) 3D model reconstruction method and device and electronic equipment
CN113205601A (en) Roaming path generation method and device, storage medium and electronic equipment
CN110069641B (en) Image processing method and device and electronic equipment
CN114422698B (en) Video generation method, device, equipment and storage medium
CN111292247A (en) Image processing method and device
CN111275799B (en) Animation generation method and device and electronic equipment
CN110390717B (en) 3D model reconstruction method and device and electronic equipment
CN114049403A (en) Multi-angle three-dimensional face reconstruction method and device and storage medium
CN110363860B (en) 3D model reconstruction method and device and electronic equipment

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 19872790; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the EP bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 06.08.2021))
122 Ep: PCT application non-entry in European phase (Ref document number: 19872790; Country of ref document: EP; Kind code of ref document: A1)