CN109064390A - Image processing method, image processing apparatus and mobile terminal - Google Patents

Image processing method, image processing apparatus and mobile terminal

Info

Publication number
CN109064390A
Authority
CN
China
Prior art keywords
image
sticker
target
processed
layers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810864000.4A
Other languages
Chinese (zh)
Other versions
CN109064390B (en)
Inventor
郭雄伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oppo Chongqing Intelligent Technology Co Ltd
Original Assignee
Oppo Chongqing Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo Chongqing Intelligent Technology Co Ltd
Priority to CN201810864000.4A
Publication of CN109064390A
Application granted
Publication of CN109064390B
Legal status: Active


Classifications

    • G06T 3/04
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G06T 5/94

Abstract

This application provides an image processing method, an image processing apparatus and a mobile terminal. The method includes: obtaining an image to be processed and a target sticker, the target sticker being composed of multiple layers, where each pixel in each layer carries transparency information; detecting whether the image to be processed contains a target object; and, if it does, covering the target object in the image to be processed and the multiple layers of the target sticker one over another in a given order, and displaying the covered image according to the transparency information of each pixel in each layer. The application enhances the stereoscopic appearance of the image after the sticker is added, which can further improve user experience.

Description

Image processing method, image processing apparatus and mobile terminal
Technical field
The present application belongs to the technical field of image processing, and more particularly relates to an image processing method, an image processing apparatus, a mobile terminal and a computer-readable storage medium.
Background
At present, many users like to share their own photos on social platforms (such as WeChat, Weibo, etc.). To make these photos more interesting, users usually add stickers to them with photo-editing software (such as Meitu XiuXiu), for example adding a beard to a face or a hat on a person's head.
However, in current photo-editing software a sticker can only be laid flat on top of the image to be processed, so the image with the added sticker looks unrealistic and has little sense of depth.
Summary of the invention
In view of this, the present application provides an image processing method, an image processing apparatus, a mobile terminal and a computer-readable storage medium, which can solve the prior-art problem that an image has a weak stereoscopic appearance after a sticker is added.
A first aspect of the present application provides an image processing method, comprising:
obtaining an image to be processed and a target sticker, the target sticker being composed of multiple layers, where each pixel in each layer carries transparency information;
detecting whether the image to be processed contains a target object; and,
if the image to be processed contains a target object:
covering the target object in the image to be processed and the multiple layers of the target sticker one over another in a given order, and displaying the covered image according to the transparency information of each pixel in each layer.
A second aspect of the present application provides an image processing apparatus, comprising:
an obtaining module, configured to obtain an image to be processed and a target sticker, the target sticker being composed of multiple layers, where each pixel in each layer carries transparency information;
a detection module, configured to detect whether the image to be processed contains a target object; and
a sticker module, configured to, if the image to be processed contains a target object, cover the target object in the image to be processed and the multiple layers of the target sticker one over another in a given order, and display the covered image according to the transparency information of each pixel in each layer.
A third aspect of the present application provides a mobile terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the method of the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method of the first aspect.
A fifth aspect of the present application provides a computer program product, including a computer program which, when executed by one or more processors, implements the steps of the method of the first aspect.
The present application therefore provides an image processing method. First, an image to be processed and a target sticker are obtained, where the target sticker is composed of multiple layers and each pixel in each layer carries transparency information; that is, each pixel of each layer of the target sticker carries not only color information but also transparency information. Second, it is detected whether the image to be processed contains a target object, for example a portrait, an animal and/or a plant. Finally, if the image to be processed contains a target object, the target object in the image to be processed and the multiple layers of the target sticker are covered one over another in a given order, and the covered image is displayed according to the transparency information of each pixel in each layer. Because the sticker is organized as multiple layers, the target object in the image to be processed can be placed between different layers, on top of all the layers, or at the bottom of all the layers, so that the target object and the sticker can occlude each other. This mutual occlusion simulates the front-rear spatial relationship between the target object and the sticker and forms an image with a much stronger stereoscopic appearance. The present application can therefore solve the prior-art problem that an image has a weak stereoscopic appearance after a sticker is added, and can further improve user experience.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present application more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without any creative effort.
Fig. 1 is a schematic flowchart of an image processing method provided by Embodiment 1 of the present application;
Fig. 2(a) is a schematic diagram of a sticker provided by Embodiment 1 of the present application;
Fig. 2(b) is a schematic diagram of the interface display before and after a target object is fused with a target sticker, provided by Embodiment 1 of the present application;
Fig. 3 is a schematic flowchart of an image processing method provided by Embodiment 2 of the present application;
Fig. 4 is a schematic flowchart of determining the ordering relationship between a target object and the multiple layers of a target sticker, provided by Embodiment 2 of the present application;
Fig. 5 is a schematic diagram of dividing the image region where a target object is located into one or more pixel sets, provided by Embodiment 2 of the present application;
Fig. 6 is a schematic diagram of an interface for adjusting the positional relationship between a target object and a target sticker, provided by Embodiment 2 of the present application;
Fig. 7 is a schematic diagram of another implementation of mutual occlusion between a target object and a target sticker, provided by Embodiment 2 of the present application;
Fig. 8 is a schematic structural diagram of an image processing apparatus provided by Embodiment 3 of the present application;
Fig. 9 is a schematic structural diagram of a mobile terminal provided by Embodiment 4 of the present application.
Detailed description of the embodiments
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it will be clear to those skilled in the art that the present application can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
The image processing method provided by the embodiments of the present application is applicable to mobile terminals. Illustratively, such mobile terminals include, but are not limited to, smartphones, tablet computers, learning machines, smart wearable devices and the like.
It should be understood that, when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, wholes, steps, operations, elements and/or components, but does not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components and/or sets thereof.
It should also be understood that the terms used in this specification are merely for the purpose of describing specific embodiments and are not intended to limit the present application. As used in this specification and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should further be understood that the term "and/or" used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrases "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
In specific implementations, the mobile terminal described in the embodiments of the present application includes, but is not limited to, portable devices such as mobile phones, laptop computers or tablet computers having a touch-sensitive surface (for example, a touch-screen display and/or a touchpad). It should also be understood that, in some embodiments, the device is not a portable communication device but a desktop computer having a touch-sensitive surface (for example, a touch-screen display and/or a touchpad).
In the following discussion, a mobile terminal including a display and a touch-sensitive surface is described. However, it should be understood that the mobile terminal may include one or more other physical user interface devices such as a physical keyboard, a mouse and/or a joystick.
The mobile terminal supports various applications, such as one or more of the following: a drawing application, a presentation application, a word-processing application, a website creation application, a disc burning application, a spreadsheet application, a game application, a telephone application, a video conferencing application, an email application, an instant messaging application, an exercise support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application and/or a video player application.
The various applications that can be executed on the mobile terminal may use at least one common physical user interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface and the corresponding information displayed on the terminal may be adjusted and/or changed between applications and/or within a corresponding application. In this way, a common physical architecture of the terminal (for example, the touch-sensitive surface) can support various applications with user interfaces that are intuitive and transparent to the user.
In addition, in the description of the present application, the terms "first", "second", etc. are used only to distinguish the description and should not be understood as indicating or implying relative importance.
To illustrate the technical solutions of the present application, specific embodiments are described below.
Embodiment 1
An image processing method provided by Embodiment 1 of the present application is described below. Referring to Fig. 1, the image processing method in Embodiment 1 of the present application includes:
In step S101, an image to be processed and a target sticker are obtained, the target sticker being composed of multiple layers, where each pixel in each layer carries transparency information.
In this embodiment of the present application, the image to be processed and the target sticker are obtained first. The image to be processed may be a frame of the preview stream after the mobile terminal starts its camera, for example a frame captured after the user launches a camera application; or an image shot by the user with the local camera, for example a picture taken with the camera application on the mobile terminal; or an image newly received through another application, for example an image sent by a WeChat contact; or an image the user downloaded from the Internet, for example through a browser over the carrier network; or a frame of a video, for example a frame of a cartoon or a TV series the user is watching. The source of the image to be processed is not limited here. The target sticker may be one the user selects, for example when the mobile terminal offers multiple different stickers and the user picks one of them as the target sticker; or it may be a sticker fixed by the mobile terminal's system that the user cannot change. The way the target sticker is set is likewise not limited here.
In addition, in this embodiment of the present application, the target sticker is composed of multiple layers, and every pixel in every layer carries transparency information. That is, each pixel in each layer can be represented in ARGB, a color representation that attaches alpha (transparency) information to the RGB color model. Fig. 2(a) is a schematic diagram of a sticker provided by Embodiment 1 of the present application. The sticker 201 in Fig. 2(a) consists of two layers, layer 202 and layer 203. In layer 202, every pixel is fully opaque; in layer 203, all pixels outside the dog are fully transparent while the pixels in the region where the dog is located are fully opaque. Layer 203 is covered over layer 202 to form sticker 201. As can be seen from sticker 201, dog B occludes tree A, giving the user the impression that dog B stands in front of tree A; the sticker provided by this application can therefore convey a sense of depth. To further enhance this effect, a shadow of the dog can be added to layer 202 or layer 203. For example, the dog's shadow can be added to layer 203, with the pixels in the dog region kept fully opaque and the pixels in the shadow region given partial transparency (to increase the realism, shadow pixels closer to the dog can be made less transparent and shadow pixels farther from the dog more transparent), while the pixels outside the dog and its shadow remain fully transparent. Covering this layer, with its shadow, over layer 202 forms a sticker 201 that, thanks to the shading, looks even more three-dimensional.
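To make the layer representation concrete, the following is a minimal sketch of the "over" compositing that such ARGB layers imply. It assumes RGBA numpy arrays with alpha in [0, 1]; the array sizes, the dog-region coordinates and the function name are illustrative assumptions, not taken from the patent.

    import numpy as np

    def alpha_over(top, bottom):
        """Composite one RGBA layer over another (the standard 'over' operator).
        Both inputs are H x W x 4 float arrays; alpha 1.0 = fully opaque,
        0.0 = fully transparent."""
        a_top, a_bot = top[..., 3:4], bottom[..., 3:4]
        a_out = a_top + a_bot * (1.0 - a_top)
        rgb = (top[..., :3] * a_top + bottom[..., :3] * a_bot * (1.0 - a_top)) \
              / np.maximum(a_out, 1e-8)
        return np.concatenate([rgb, a_out], axis=-1)

    # Sticker 201 of Fig. 2(a): layer 203 (dog, transparent elsewhere) covered
    # over layer 202 (scenery, fully opaque).
    layer_202 = np.zeros((480, 640, 4)); layer_202[..., 3] = 1.0   # opaque scenery layer
    layer_203 = np.zeros((480, 640, 4))                            # fully transparent by default
    layer_203[200:400, 300:500, :3] = 0.8                          # the "dog" pixels ...
    layer_203[200:400, 300:500, 3] = 1.0                           # ... made fully opaque
    sticker_201 = alpha_over(layer_203, layer_202)

A shadow, as described above, would simply be a region of layer 203 whose alpha lies strictly between 0 and 1.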
In step S102, it is detected whether the image to be processed contains a target object.
Usually the user only wants to fuse the objects he or she cares about (for example, portraits or animals) with the target sticker. Therefore, in this embodiment of the present application, it is first detected whether the image to be processed contains a target object; the image to be processed is fused with the target sticker only if it does, and is left untouched otherwise.
In this embodiment of the present application, a trained scene detection model (a neural network model for scene detection) can be used to detect the target object in the image to be processed; alternatively, any other scene detection method commonly used in the art can be used. The detection method for the target object is not limited here. The target object can be an object that frequently appears in users' photos, such as a portrait, an animal, flowers and/or food.
In step S103, if the image to be processed contains a target object, the target object in the image to be processed and the multiple layers of the target sticker are covered one over another in a given order, and the covered image is displayed according to the transparency information of each pixel in each layer.
In this embodiment of the present application, if the image to be processed obtained in step S101 contains a target object, such as a portrait, the target object is placed between the layers of the target sticker, on top of all the layers, or at the bottom of all the layers. The covering order of the target object and the layers of the target sticker can be set by the user; for example, the user may choose to place the target object below all the layers of the target sticker.
In this embodiment of the present application, covering the target object in the image to be processed and the multiple layers of the target sticker one over another in a given order may include:
detecting the edge contour of the target object in the image to be processed;
setting the transparency of the image region outside the closed region enclosed by the edge contour to fully transparent, to obtain a transparency-set image; and
covering the transparency-set image and the multiple layers of the target sticker one over another in a given order.
To make this procedure easier to understand, it is illustrated below with Figs. 2(a) and 2(b). Suppose the target sticker obtained in step S101 is sticker 201 of Fig. 2(a), the image to be processed obtained in step S101 is image 204 of Fig. 2(b), and the target object is a portrait. Step S102 detects that image 204 contains a target object, so step S103 of Embodiment 1 is executed. In step S103, an image segmentation algorithm can be used to obtain the edge contour of the target object in image 204; the transparency of all pixels outside the edge contour in the image to be processed is then set to fully transparent, and the transparency of all pixels inside the closed region enclosed by the edge contour is set to fully opaque (alternatively, to create special effects, the pixels inside the closed region can be given partial transparency), producing the transparency-set image. This image is then placed between layer 202 and layer 203 to form image 205, as shown in Fig. 2(b). In image 205, the portrait occludes part of the trees and the dog occludes part of the portrait, giving the user the impression that the portrait stands behind the dog and in front of the occluded trees; this mutual occlusion lends the sticker-processed image a definite sense of depth. In addition, to blend the target object better with the sticker in the processed image, the edge region of the target object in the image to be processed can be blurred. A minimal sketch of this cutout-and-stack procedure follows.
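The sketch below uses the same RGBA conventions as the earlier one; the segmentation mask is assumed to come from any off-the-shelf segmentation algorithm, and all names, sizes and region coordinates are illustrative stand-ins, not patent content.

    import numpy as np

    def alpha_over(top, bottom):  # the RGBA "over" operator, as in the earlier sketch
        a_t, a_b = top[..., 3:4], bottom[..., 3:4]
        a = a_t + a_b * (1.0 - a_t)
        rgb = (top[..., :3] * a_t + bottom[..., :3] * a_b * (1.0 - a_t)) / np.maximum(a, 1e-8)
        return np.concatenate([rgb, a], axis=-1)

    def cutout(image_rgb, mask):
        """The 'transparency-set image': opaque inside the detected edge contour,
        fully transparent everywhere outside it."""
        h, w, _ = image_rgb.shape
        out = np.zeros((h, w, 4))
        out[..., :3] = image_rgb
        out[..., 3] = mask            # binary mask: 1 inside the contour, 0 outside
        return out

    # Stand-ins for image 204, its portrait mask, and the two sticker layers.
    image_204 = np.random.rand(480, 640, 3)
    portrait_mask = np.zeros((480, 640)); portrait_mask[100:460, 240:420] = 1.0
    layer_202 = np.zeros((480, 640, 4)); layer_202[..., 3] = 1.0
    layer_203 = np.zeros((480, 640, 4)); layer_203[200:400, 300:500, 3] = 1.0

    # Fig. 2(b): portrait between layer 202 (bottom) and layer 203 (top) -> image 205
    image_205 = alpha_over(layer_203, alpha_over(cutout(image_204, portrait_mask), layer_202))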
Because Embodiment 1 organizes the sticker as multiple layers, the target object in the image to be processed can be placed between different layers, on top of all the layers, or at the bottom of all the layers, so that the target object and the sticker can occlude each other. This mutual occlusion simulates the front-rear spatial relationship between the target object and the sticker and forms an image with a stronger stereoscopic appearance. The present application can therefore solve the prior-art problem that an image has a weak stereoscopic appearance after a sticker is added, and can further improve user experience.
Embodiment 2
Another image processing method provided by Embodiment 2 of the present application is described below. Referring to Fig. 3, the image processing method in Embodiment 2 of the present application includes:
In step S301, an image to be processed and a target sticker are obtained, the target sticker being composed of multiple layers, where each pixel in each layer carries transparency information.
In step S302, it is detected whether the image to be processed contains a target object.
In Embodiment 2, steps S301 and S302 are identical to steps S101 and S102 of Embodiment 1; for details, refer to the description of Embodiment 1, which is not repeated here.
In step S303, if the image to be processed contains a target object, the depth information of the image region where the target object is located in the image to be processed is detected.
In this embodiment of the present application, the depth information can be the perpendicular distance between the target object and the observer. For example, if the image to be processed was captured by the mobile terminal, the depth information is the perpendicular distance between the target object in the image and the plane in which the mobile terminal lies.
In practice, people rely mainly on their two eyes to judge the depth of an observed object. Therefore, if the image to be processed was captured by the mobile terminal, an auxiliary camera can be provided on the mobile terminal: while the mobile terminal captures the image to be processed, the auxiliary camera captures another, auxiliary image. Because the auxiliary camera is some distance away from the camera that captures the image to be processed, the two cameras have a certain parallax, so the image to be processed differs from the auxiliary image; the perpendicular distance between the target object in the image to be processed and the plane of the mobile terminal can therefore be computed from the auxiliary image. Computing image depth with dual cameras is prior art and is not described further here.
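For reference, the textbook relation behind such dual-camera depth (a standard stereo-vision formula, not quoted from the patent): for two parallel cameras with focal length f and baseline B, a scene point whose image positions differ by the disparity d between the two views lies at the perpendicular distance

    Z = (f · B) / d

so a wider baseline (larger parallax) yields a larger disparity for the same point and therefore a more accurate depth estimate.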
The dual-camera method above requires another, auxiliary image corresponding to the image to be processed. In some cases, for example when the image to be processed was downloaded from the Internet or received over WeChat, no such auxiliary image is available. In this embodiment of the present application, the perpendicular distance between the target object and the observer can then also be estimated from the relative sizes of the objects in the image to be processed. The method of computing the depth information is not limited here.
In step S304, the ordering relationship between the target object and the multiple layers of the target sticker is determined according to the depth information and the distance description information.
In this embodiment of the present application, the target sticker obtained in step S301 carries distance description information describing the perpendicular distance between each layer and the observer. For example, if the target sticker is sticker 201 of Fig. 2(a), its distance description information may be: layer 202 is 10 m from the observer, and layer 203 is 6 m from the observer. In general, the scenes contained in one layer are not all at the same distance from the observer; for example, tree A in layer 202 is necessarily closer to the observer than the other trees. The average of the distances of the scenes in a layer can therefore be taken as the perpendicular distance between that layer and the observer.
In this embodiment of the present application, step S304 can first average the depth values of the pixels in the image region where the target object is located. For example, if the target object is a portrait whose hand is 1 m from the observer and whose body is 1.5 m away, the average is 1.25 m. The ordering relationship between the target object and each layer is then determined from the distance description information of the target sticker. For example, with sticker 201 of Fig. 2(a), layer 203 is 6 m from the observer and layer 202 is 10 m; since the computed average of 1.25 m is smaller than both, the portrait is placed on top of layers 203 and 202. A minimal sketch of this averaging approach is given below. Alternatively, in this embodiment of the present application, step S304 can determine the ordering relationship between the target object and the multiple layers of the target sticker at a finer granularity through steps S401 to S404, described next and shown in Fig. 4.
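A sketch of the averaging approach just described, assuming layer distances listed nearest-first; the function name and threshold convention are illustrative assumptions.

    import numpy as np

    def insertion_slot(object_depths, layer_distances):
        """How many sticker layers the target object sits behind.
        object_depths: per-pixel distances of the target-object region (meters).
        layer_distances: per-layer observer distances, nearest first,
                         e.g. [6.0, 10.0] for layers 203 and 202 of Fig. 2(a).
        Returns 0 for 'in front of every layer', up to len(layer_distances)
        for 'behind every layer'."""
        mean_depth = float(np.mean(object_depths))   # e.g. (1.0 + 1.5) / 2 = 1.25 m
        return int(np.sum(mean_depth > np.asarray(layer_distances)))

    slot = insertion_slot(np.array([1.0, 1.5]), [6.0, 10.0])  # -> 0: in front of both layers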
In step S401, the position range in front of the target sticker, the position ranges between each pair of adjacent layers of the target sticker, and the position range behind the target sticker are obtained according to the distance description information corresponding to the target sticker.
In this embodiment of the present application, the position range in front of the target sticker, the position ranges between adjacent layers, and the position range behind the target sticker may or may not include the distances of the layers themselves. For example, if the target sticker is sticker 201 of Fig. 2(a) and its distance description information states that layer 202 is 10 m and layer 203 is 6 m from the observer, then the position range in front of the target sticker can be the range greater than 0 and up to 6 m, or the range greater than 0 and strictly below 6 m. The position ranges determined in this step never intersect. To describe the determination of the position ranges more clearly, examples follow:
If the target sticker is sticker 201 of Fig. 2(a), with layer 202 at 10 m and layer 203 at 6 m from the observer, then: the position range in front of the target sticker can be (0, 6] m, the position range between the two layers (6, 10] m, and the position range behind the target sticker greater than 10 m; or the range in front can be (0, 6) m, the range between the layers (6, 10) m, and the range behind greater than 10 m; or the range in front can be (0, 6] m, the range between the layers (6, 10) m, and the range behind greater than 10 m; and so on.
If the target sticker instead contains three layers a, b and c, with distance description information stating that layer a is 10 m from the observer, layer b 6 m and layer c 2 m, then the position range in front of the target sticker can be (0, 2] m, the position ranges between adjacent layers can be (2, 6] m and (6, 10] m, and the position range behind the target sticker can be greater than 10 m; and so on.
In step S402, according to the position range in front of the target sticker, the position ranges between adjacent layers, the position range behind the target sticker, and the depth information of the image region where the target object is located in the image to be processed, the image region where the target object is located is divided into one or more pixel sets, and the front-rear positional relationship between these pixel sets and the multiple layers of the target sticker is determined, where the pixels in one pixel set all fall in the same position range.
To describe this step more clearly, an example is given below with Fig. 5:
Suppose the target object is a portrait and the obtained image to be processed is image 501. We can first compute, per step S303, the depth information of the image region where the portrait is located in image 501, i.e. the perpendicular distance of each pixel in that region from the observer. Illustratively, as shown in Fig. 5, point A is computed to be 6.5 m from the observer, point B 5.5 m, point C 5.8 m and point D 6.2 m (for ease of description, only the distances of the four pixels A, B, C and D are listed here).
Suppose the target sticker obtained in step S301 is sticker 201 of Fig. 2(a), and that the position range in front of the target sticker determined in step S401 (call it the first position range, for ease of later description) is (0, 6] m, the position range between the two layers (the second position range) is (6, 10] m, and the position range behind the target sticker (the third position range) is greater than 10 m. Using the depth of the portrait computed in step S303, we can then find, in the image region where the portrait is located, all pixels whose distance from the observer falls into the first position range, all pixels that fall into the second position range, and all pixels that fall into the third position range. In the example of Fig. 5, pixels A and D fall into the second position range, and pixels B and C fall into the first position range. Traversing all pixels of the image region where the target object is located in image 501 determines which position range every pixel of that region belongs to. As shown in Fig. 5, suppose every pixel in regions M and N (call these pixels the first pixel set) lies in the first position range, and the remaining pixels of the image region where the target object is located (the second pixel set) lie in the second position range.
The front-rear positional relationship between each pixel set and the multiple layers of the target sticker is then determined. In the example above, the first pixel set lies in front of layer 203, and the second pixel set lies between layer 202 and layer 203.
In step S403, the pixel-point image corresponding to each pixel set is obtained, where each pixel-point image is an image in which the transparency of all pixels of the image to be processed other than the corresponding pixel set is set to fully transparent.
In the example of step S402, the pixel-point image corresponding to the first pixel set (the first pixel-point image) and the pixel-point image corresponding to the second pixel set (the second pixel-point image) need to be obtained. Setting the transparency of all pixels of image 501 other than the first pixel set to fully transparent yields the first pixel-point image; setting the transparency of all pixels of image 501 other than the second pixel set to fully transparent yields the second pixel-point image. A sketch of steps S402 and S403 follows.
In step S404, the ordering relationship between each pixel-point image and the multiple layers of the target sticker is determined according to the front-rear positional relationship between the pixel sets and the multiple layers of the target sticker.
Since step S402 has already determined the front-rear positional relationship between each pixel set and the layers of the target sticker, the front-rear positional relationship between the corresponding pixel-point images and the layers of the target sticker follows directly.
In the example of step S402, the first pixel set lies in front of layer 203 and the second pixel set lies between layer 202 and layer 203, so the first pixel-point image lies in front of layer 203 and the second pixel-point image lies between layer 202 and layer 203.
In step S305, the target object and the multiple layers of the target sticker are covered one over another according to the ordering relationship, and the covered image is displayed according to the transparency information of each pixel in each layer.
In this embodiment of the present application, if the ordering relationship between the target object and the layers of the target sticker was obtained in step S304 through steps S401 to S404, step S305 covers the pixel-point images and the layers of the target sticker according to the ordering determined in step S404. For example, step S404 concluded that the first pixel-point image lies in front of layer 203 and the second pixel-point image lies between layer 202 and layer 203; so the first pixel-point image is covered over layer 203, layer 203 is covered over the second pixel-point image, and the second pixel-point image is covered over layer 202. The image after this covering can then be displayed on the screen of the mobile terminal. A sketch of this interleaved covering follows.
In addition, Embodiment 2 can also let the user adjust the relative positional relationship between the target object and each layer. As shown in Fig. 6, suppose the image obtained by fusing the target object and the target sticker with the technical solution of Embodiment 2 is image 601. To let the user adjust the front-rear positional relationship between the target object and each layer, an adjustment menu can pop up on the right side of image 601, and the user can adjust the positional relationship between the target object and each layer by moving the circle O up and down. Moreover, when this positional relationship is adjusted, the mobile terminal can scale the target object accordingly; for example, when the user moves the target object behind the curtain, the target object can be shrunk by a certain ratio.
To realize mutual occlusion between the target object and the target sticker, Embodiment 2 provides the following technical solution: according to the depth information of the target object, one or more pixel-point images are generated, and each pixel-point image and the layers of the target sticker are covered in order, realizing the occlusion relationship between the target sticker and the target object. Alternatively, the occlusion relationship can also be realized as follows: according to the depth information of the target object and the distance description information of the target sticker, determine the region of the target object that is occluded by the target sticker and the region that is not; set the transparency of the occluded region of the target object to fully transparent and the transparency of the non-occluded region to fully opaque; and finally cover the transparency-set target object over the target sticker, realizing the front-rear occlusion relationship between the target object and the target sticker. As shown in Fig. 7, the target sticker consists of the gray background shown in layer 1 and the golden cudgel shown in layer 2. After the target object is obtained, the parts of the target object occluded by layer 1 or layer 2, and the parts occluded by neither, can be determined from the depth information of the target object and the distance description information of the target sticker. The transparency of the parts occluded by layer 1 or layer 2, such as region A, is set to fully transparent, and the transparency of the parts occluded by neither layer, such as region B, is set to fully opaque. Finally, the transparency-set target object is covered over the target sticker, forming an image in which the golden cudgel occludes region A while region B remains unoccluded, which again produces a more stereoscopic image. A sketch of this alternative follows.
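A minimal sketch of this alternative occlusion scheme, under the same RGBA and depth conventions as before. Treating any sticker pixel with non-zero alpha as "covering" is a simplifying assumption, as are all the names.

    import numpy as np

    def visible_object(image_rgb, object_mask, depth, layers, layer_distances):
        """Keep the target opaque only where no sticker layer both covers the
        pixel and lies in front of it (region B of Fig. 7); everywhere occluded
        (region A) becomes fully transparent. layers / layer_distances are the
        RGBA sticker layers and their observer distances."""
        occluded = np.zeros(object_mask.shape, dtype=bool)
        for layer_rgba, dist in zip(layers, layer_distances):
            covers = layer_rgba[..., 3] > 0        # sticker content at this pixel
            occluded |= covers & (depth > dist)    # that content is in front of the object
        h, w, _ = image_rgb.shape
        out = np.zeros((h, w, 4))
        out[..., :3] = image_rgb
        out[..., 3] = (object_mask & ~occluded).astype(float)
        return out   # cover this over the assembled sticker to finish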
By detecting the depth information of the image region where the target object is located in the image to be processed, Embodiment 2 fuses the target object with the layers of the target sticker automatically, whereas the technical solution of Embodiment 1 requires the user to specify the relative positional relationship between the target object and each layer. Compared with Embodiment 1, the technical solution of Embodiment 2 is therefore more convenient to operate; and because Embodiment 2 fuses the target object with the target sticker through depth information, the fusion between them can look more realistic. Embodiment 2 can therefore further improve user experience compared with Embodiment 1.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
Embodiment 3
Embodiment 3 of the present application provides an image processing apparatus. For ease of description, only the parts related to the present application are shown. The image processing apparatus 700 shown in Fig. 8 includes:
an obtaining module 701, configured to obtain an image to be processed and a target sticker, the target sticker being composed of multiple layers, where each pixel in each layer carries transparency information;
a detection module 702, configured to detect whether the image to be processed contains a target object; and
a sticker module 703, configured to, if the image to be processed contains a target object, cover the target object in the image to be processed and the multiple layers of the target sticker one over another in a given order, and display the covered image according to the transparency information of each pixel in each layer.
Optionally, the sticker module 703 includes:
an edge detection unit, configured to detect the edge contour of the target object in the image to be processed;
a transparency setting unit, configured to set the transparency of the image region outside the closed region enclosed by the edge contour in the image to be processed to fully transparent, to obtain a transparency-set image; and
a covering unit, configured to cover the transparency-set image and the multiple layers of the target sticker one over another in a given order.
Optionally, the covering unit includes:
a customization subunit, configured to obtain the user-defined ordering relationship between the transparency-set image and the multiple layers of the target sticker; and
a covering subunit, configured to cover the transparency-set image and the multiple layers of the target sticker one over another according to the user-defined ordering relationship.
Optionally, the target sticker carries distance description information describing the distance between each layer and the observer. Correspondingly, the sticker module 703 includes:
a depth detection unit, configured to detect the depth information of the image region where the target object is located in the image to be processed;
an ordering determination unit, configured to determine the ordering relationship between the target object and the multiple layers of the target sticker according to the depth information and the distance description information; and
an object-layer covering unit, configured to cover the target object and the multiple layers of the target sticker one over another according to the ordering relationship.
Optionally, the ordering determination unit includes:
a position range determination subunit, configured to obtain, according to the distance description information corresponding to the target sticker, the position range in front of the target sticker, the position ranges between adjacent layers of the target sticker, and the position range behind the target sticker;
a pixel set determination subunit, configured to divide the image region where the target object is located in the image to be processed into one or more pixel sets according to the position range in front of the target sticker, the position ranges between adjacent layers, the position range behind the target sticker, and the depth information of the image region where the target object is located, and to determine the front-rear positional relationship between the pixel sets and the multiple layers of the target sticker, where the pixels in one pixel set all fall in the same position range;
a pixel-point image determination subunit, configured to obtain the pixel-point image corresponding to each pixel set, where each pixel-point image is an image in which the transparency of all pixels of the image to be processed other than the corresponding pixel set is set to fully transparent; and
an ordering subunit, configured to determine the ordering relationship between each pixel-point image and the multiple layers of the target sticker according to the front-rear positional relationship between the pixel sets and the multiple layers of the target sticker.
Correspondingly, the object-layer covering unit is specifically configured to cover each pixel-point image and the multiple layers of the target sticker one over another according to the ordering relationship between each pixel-point image and the multiple layers of the target sticker.
Optionally, the image processing apparatus further includes:
a blurring module, configured to blur the edge region of the target object.
It should be noted that, since contents such as the information exchange between the above devices/units and their execution processes are based on the same concept as the method embodiments of the present application, their specific functions and technical effects can be found in the method embodiment sections and are not repeated here.
Embodiment 4
Fig. 9 is a schematic diagram of the mobile terminal provided by Embodiment 4 of the present application. As shown in Fig. 9, the mobile terminal 8 of this embodiment includes a processor 80, a memory 81, and a computer program 82 stored in the memory 81 and executable on the processor 80. When executing the computer program 82, the processor 80 implements the steps in each of the above method embodiments, such as steps S101 to S103 shown in Fig. 1; alternatively, when executing the computer program 82, the processor 80 implements the functions of each module/unit in each of the above apparatus embodiments, such as the functions of modules 701 to 703 shown in Fig. 8.
Illustratively, the computer program 82 can be divided into one or more modules/units, which are stored in the memory 81 and executed by the processor 80 to carry out the present application. The one or more modules/units can be a series of computer program instruction segments capable of completing specific functions, the instruction segments describing the execution of the computer program 82 in the mobile terminal 8. For example, the computer program 82 can be divided into an obtaining module, a detection module and a sticker module, with the following specific functions:
obtaining an image to be processed and a target sticker, the target sticker being composed of multiple layers, where each pixel in each layer carries transparency information;
detecting whether the image to be processed contains a target object; and,
if the image to be processed contains a target object:
covering the target object in the image to be processed and the multiple layers of the target sticker one over another in a given order, and displaying the covered image according to the transparency information of each pixel in each layer.
The mobile terminal 8 can be a computing device such as a smartphone, a tablet computer, a learning machine or a smart wearable device. The mobile terminal may include, but is not limited to, the processor 80 and the memory 81. Those skilled in the art will understand that Fig. 9 is only an example of the mobile terminal 8 and does not constitute a limitation on it; the mobile terminal may include more or fewer components than illustrated, combine certain components, or use different components; for example, it may also include input/output devices, network access devices, a bus, etc.
The so-called processor 80 can be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor can be a microprocessor or any conventional processor.
The memory 81 can be an internal storage unit of the mobile terminal 8, such as a hard disk or internal memory of the mobile terminal 8. The memory 81 can also be an external storage device of the mobile terminal 8, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the mobile terminal 8. Further, the memory 81 can include both the internal storage unit of the mobile terminal 8 and an external storage device. The memory 81 is used to store the computer program and the other programs and data required by the mobile terminal, and can also be used to temporarily store data that has been or will be output.
It will be clear to those skilled in the art that, for convenience and brevity of description, only the division of the above functional units and modules is used as an example. In practical applications, the above functions can be assigned to different functional units and modules as needed; that is, the internal structure of the apparatus can be divided into different functional units or modules to complete all or part of the functions described above. The functional units in the embodiments can be integrated in one processing unit, each unit can exist alone physically, or two or more units can be integrated in one unit; the integrated unit can be realized in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference can be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, each embodiment is described with its own emphasis. For parts that are not detailed or recorded in one embodiment, reference can be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled professionals can use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present application.
In the embodiments provided in this application, it should be understood that the disclosed apparatus/mobile terminal and method may be implemented in other ways. For example, the apparatus/mobile terminal embodiments described above are merely illustrative: the division into modules or units is only a logical functional division, and other divisions are possible in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or of other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in hardware or as a software functional unit.
If the integrated module/unit is implemented as a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. On this understanding, this application may implement all or part of the processes of the above method embodiments by means of a computer program that instructs the relevant hardware; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of each of the above method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content of the computer-readable medium may be expanded or restricted according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, the computer-readable medium does not include electrical carrier signals or telecommunications signals.
The above embodiments are intended only to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their technical features; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application, and shall all be covered by the protection scope of this application.

Claims (10)

1. An image processing method, comprising:
obtaining an image to be processed and a target sticker, the target sticker being composed of multiple layers, wherein each pixel in each layer carries transparency information;
detecting whether the image to be processed contains a target object; and
if the image to be processed contains a target object:
stacking the target object in the image to be processed and the multiple layers of the target sticker in sequence, and displaying the stacked image according to the transparency information of each pixel in each layer.
2. The image processing method of claim 1, wherein stacking the target object in the image to be processed and the multiple layers of the target sticker in sequence comprises:
detecting an edge contour of the target object in the image to be processed;
setting the transparency of the image region of the image to be processed outside the closed region enclosed by the edge contour to a fully transparent state, to obtain a transparency-set image; and
stacking the transparency-set image and the multiple layers of the target sticker in sequence.
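As a rough illustration of this claim, the sketch below assumes a binary segmentation mask of the target object is already available (the claim leaves the segmentation method open) and uses OpenCV 4 to trace the outer edge contour and make every pixel outside the enclosed region fully transparent. Function and variable names are the editor's, not the patent's.

import cv2
import numpy as np

def cut_out_object(bgr_image, object_mask):
    """Return a BGRA image whose pixels outside the object's contour have alpha 0."""
    # Outer edge contour of the object in the 8-bit binary mask.
    contours, _ = cv2.findContours(object_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Fill the closed region enclosed by the contour; this becomes the alpha plane.
    alpha = np.zeros(object_mask.shape, dtype=np.uint8)
    cv2.drawContours(alpha, contours, -1, color=255, thickness=cv2.FILLED)
    # Everything outside the contour receives alpha 0, a fully transparent state.
    bgra = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2BGRA)
    bgra[:, :, 3] = alpha
    return bgra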
3. The image processing method of claim 2, wherein stacking the transparency-set image and the multiple layers of the target sticker in sequence comprises:
obtaining a user-defined ordering relationship between the transparency-set image and the multiple layers of the target sticker; and
stacking the transparency-set image and the multiple layers of the target sticker according to the user-defined ordering relationship.
4. The image processing method of claim 1, wherein the target sticker is associated with distance description information describing the distance of each layer from an observer;
and wherein stacking the target object in the image to be processed and the multiple layers of the target sticker in sequence comprises:
detecting depth information of the image region in which the target object is located in the image to be processed;
determining, according to the depth information and the distance description information, an ordering relationship between the target object and the multiple layers of the target sticker; and
stacking the target object and the multiple layers of the target sticker according to the ordering relationship.
5. The image processing method of claim 4, wherein determining, according to the depth information and the distance description information, the ordering relationship between the target object and the multiple layers of the target sticker comprises:
obtaining, according to the distance description information of the target sticker, the position range in front of the target sticker, the position ranges between each pair of adjacent layers of the target sticker, and the position range behind the target sticker;
dividing the image region in which the target object is located into one or more pixel sets according to those position ranges and the depth information of that image region, and determining a front-to-back positional relationship between the pixel sets and the multiple layers of the target sticker, wherein all pixels in one pixel set lie in the same position range;
obtaining a pixel image corresponding to each pixel set, wherein each pixel image is obtained by setting the transparency of all pixels of the image to be processed other than those in the corresponding pixel set to a fully transparent state; and
determining, according to the front-to-back positional relationship between the pixel sets and the multiple layers of the target sticker, an ordering relationship between each pixel image and the multiple layers of the target sticker;
and wherein stacking the target object and the multiple layers of the target sticker according to the ordering relationship comprises:
stacking each pixel image and the multiple layers of the target sticker according to the ordering relationship between each pixel image and the multiple layers of the target sticker.
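A hedged sketch of how claims 4 and 5 might be realized: assuming a per-pixel depth map for the object region and the sticker's distance description information expressed as increasing depth boundaries, the object's pixels are binned into the position ranges, one image per range is produced with all other pixels fully transparent, and those images are then interleaved with the sticker layers back-to-front. The depth convention (larger value means farther from the observer) and all names are assumptions.

import numpy as np

def split_by_depth(object_rgba, depth_map, range_edges):
    """Return one RGBA array per position range ("pixel set"), front to back.

    range_edges are the increasing observer distances of the sticker layers, so
    len(range_edges) + 1 slots exist: in front of all layers, between each pair
    of adjacent layers, and behind all layers.
    """
    slots = np.digitize(depth_map, range_edges)  # slot index per pixel
    images = []
    for s in range(len(range_edges) + 1):
        img = object_rgba.copy()
        # Pixels outside this pixel set are set to a fully transparent state.
        img[..., 3] = np.where(slots == s, img[..., 3], 0)
        images.append(img)
    return images

Each returned pixel image then takes the stacking position its depth slot dictates: the slot-0 image is composited in front of every sticker layer, and the deepest slot's image behind all of them.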
6. The image processing method of any one of claims 1 to 5, further comprising, before stacking the target object in the image to be processed and the multiple layers of the target sticker in sequence and displaying the stacked image according to the transparency information of each pixel in each layer:
blurring the edge region of the target object.
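One plausible reading of this step is to feather only the alpha channel near the contour so the cut-out blends into the sticker layers instead of ending in a hard edge. In the sketch, the kernel size is an illustrative choice, not a value taken from the patent.

import cv2

def feather_edges(bgra, ksize=(9, 9)):
    """Soften the object's edge by blurring the alpha plane of a BGRA cut-out."""
    out = bgra.copy()
    # Interior alpha stays near 255 and exterior near 0, so the blur only
    # creates a soft ramp across the band around the former hard edge.
    out[:, :, 3] = cv2.GaussianBlur(bgra[:, :, 3], ksize, 0)
    return out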
7. An image processing apparatus, comprising:
an obtaining module, configured to obtain an image to be processed and a target sticker, the target sticker being composed of multiple layers, wherein each pixel in each layer carries transparency information;
a detection module, configured to detect whether the image to be processed contains a target object; and
a sticker module, configured to, if the image to be processed contains a target object, stack the target object in the image to be processed and the multiple layers of the target sticker in sequence, and display the stacked image according to the transparency information of each pixel in each layer.
8. The image processing apparatus of claim 7, wherein the sticker module comprises:
an edge detection unit, configured to detect an edge contour of the target object in the image to be processed;
a transparency setting unit, configured to set the transparency of the image region of the image to be processed outside the closed region enclosed by the edge contour to a fully transparent state, to obtain a transparency-set image; and
a stacking unit, configured to stack the transparency-set image and the multiple layers of the target sticker in sequence.
9. A mobile terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method of any one of claims 1 to 6.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 6.
CN201810864000.4A 2018-08-01 2018-08-01 Image processing method, image processing device and mobile terminal Active CN109064390B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810864000.4A CN109064390B (en) 2018-08-01 2018-08-01 Image processing method, image processing device and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810864000.4A CN109064390B (en) 2018-08-01 2018-08-01 Image processing method, image processing device and mobile terminal

Publications (2)

Publication Number Publication Date
CN109064390A 2018-12-21
CN109064390B CN109064390B (en) 2023-04-07

Family

ID=64832278

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810864000.4A Active CN109064390B (en) 2018-08-01 2018-08-01 Image processing method, image processing device and mobile terminal

Country Status (1)

Country Link
CN (1) CN109064390B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102999943A (en) * 2012-12-21 2013-03-27 吴心妮 Method and system for image processing
CN105493152A (en) * 2013-07-22 2016-04-13 株式会社得那 Image processing device and image processing program
CN106293655A * 2015-05-20 2017-01-04 时空创意(北京)科技文化发展有限公司 Image processing method for adding and erasing stickers between terminals
CN105578026A (en) * 2015-07-10 2016-05-11 宇龙计算机通信科技(深圳)有限公司 Photographing method and user terminal
CN105701762A (en) * 2015-12-30 2016-06-22 联想(北京)有限公司 Picture processing method and electronic equipment
CN107071555A * 2017-03-31 2017-08-18 奇酷互联网络科技(深圳)有限公司 Image loading method and device in VR video, and electronic equipment
CN107563962A (en) * 2017-09-08 2018-01-09 北京奇虎科技有限公司 Video data real-time processing method and device, computing device
CN108174082A * 2017-11-30 2018-06-15 维沃移动通信有限公司 Image capturing method and mobile terminal

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109741415A * 2019-01-02 2019-05-10 中国联合网络通信集团有限公司 Layer sorting method, device and terminal device
CN109741415B (en) * 2019-01-02 2023-08-08 中国联合网络通信集团有限公司 Picture layer arrangement method and device and terminal equipment
CN112019702B (en) * 2019-05-31 2023-08-25 北京嗨动视觉科技有限公司 Image processing method, device and video processor
CN112019702A (en) * 2019-05-31 2020-12-01 北京嗨动视觉科技有限公司 Image processing method and device and video processor
CN112583996A (en) * 2019-09-29 2021-03-30 北京嗨动视觉科技有限公司 Video processing method and video processing device
CN110705526A (en) * 2019-10-25 2020-01-17 云南电网有限责任公司电力科学研究院 Unmanned aerial vehicle-based tree obstacle clearing method, device and system
CN110705526B (en) * 2019-10-25 2023-08-08 云南电网有限责任公司电力科学研究院 Tree obstacle clearing method, device and system based on unmanned aerial vehicle
CN110825993A (en) * 2019-10-30 2020-02-21 北京字节跳动网络技术有限公司 Picture display method and device and electronic equipment
CN111768422A (en) * 2020-01-16 2020-10-13 北京沃东天骏信息技术有限公司 Edge detection processing method, device, equipment and storage medium
CN111300816A (en) * 2020-03-20 2020-06-19 济宁学院 Smooth printing method based on photocuring 3D printing
WO2021259093A1 (en) * 2020-06-24 2021-12-30 中兴通讯股份有限公司 Image display method and apparatus, computer readable storage medium, and electronic apparatus
US11948537B2 (en) 2020-06-24 2024-04-02 Zte Corporation Image display method and apparatus, computer readable storage medium, and electronic apparatus
CN112070674B (en) * 2020-09-04 2021-11-02 北京康吉森技术有限公司 Image synthesis method and device
CN112070674A (en) * 2020-09-04 2020-12-11 北京伟杰东博信息科技有限公司 Image synthesis method and device
CN112449167A (en) * 2020-11-13 2021-03-05 深圳市火乐科技发展有限公司 Image sawtooth elimination and image display method and device
CN112672056A (en) * 2020-12-25 2021-04-16 维沃移动通信有限公司 Image processing method and device
CN113674435A (en) * 2021-07-27 2021-11-19 阿里巴巴新加坡控股有限公司 Image processing method, electronic map display method and device and electronic equipment
CN114125304A (en) * 2021-11-30 2022-03-01 维沃移动通信有限公司 Shooting method and device thereof
CN114416260A (en) * 2022-01-20 2022-04-29 北京字跳网络技术有限公司 Image processing method, image processing device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN109064390B (en) 2023-04-07

Similar Documents

Publication Title
CN109064390A Image processing method, image processing apparatus and mobile terminal
WO2020010979A1 (en) Method and apparatus for training model for recognizing key points of hand, and method and apparatus for recognizing key points of hand
CN109961406A Image processing method, apparatus and terminal device
US11256958B1 (en) Training with simulated images
CN108765278A Image processing method, mobile terminal and computer-readable storage medium
CN111541907B (en) Article display method, apparatus, device and storage medium
CN107395958B (en) Image processing method and device, electronic equipment and storage medium
CN109086742A Scene recognition method, scene recognition device and mobile terminal
CN110175980A Image sharpness recognition method, image sharpness recognition device and terminal device
CN108304075A Method and apparatus for human-computer interaction in an augmented reality device
CN103313080A (en) Control apparatus, electronic device, control method, and program
TW202038191A Liveness detection method and device, electronic equipment and storage medium
WO2010064174A1 (en) Generation of a depth map
CN107944420A Illumination processing method and apparatus for face images
CN104081307A (en) Image processing apparatus, image processing method, and program
CN110502974A Video image display method, apparatus, device and readable storage medium
CN109345553A Palm and palm key point detection method, apparatus and terminal device
CN111008935B (en) Face image enhancement method, device, system and storage medium
CN111047509A (en) Image special effect processing method and device and terminal
CN109816694A Target tracking method, device and electronic equipment
CN108764139A Face detection method, mobile terminal and computer-readable storage medium
CN106204746A Augmented reality system for live painting on 3D models
CN106096043A Photographing method and mobile terminal
CN113052923B (en) Tone mapping method, tone mapping apparatus, electronic device, and storage medium
CN114092670A (en) Virtual reality display method, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant