CN106097429B - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN106097429B
CN106097429B (granted from application CN201610463857.6A)
Authority
CN
China
Prior art keywords
image
pixel
character image
information
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610463857.6A
Other languages
Chinese (zh)
Other versions
CN106097429A (en)
Inventor
曹文升
荆彦青
魏学峰
耿天平
张冲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN201610463857.6A
Publication of CN106097429A
Application granted
Publication of CN106097429B
Legal status: Active
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the invention discloses an image processing method and device. The embodiment receives an information acquisition request; obtains depth image information of a person according to the request, the depth image information including a person image and depth information of the pixels in the person image; obtains rendering position indication information, which indicates a target rendering region in an interactive interface; and renders the person image in the target rendering region according to the rendering position indication information and the depth information corresponding to the pixels.

Description

Image processing method and device
Technical field
The present invention relates to the field of communication technology, and in particular to an image processing method and device.
Background technology
With the development of the Internet and of mobile communication networks, application programs have become an indispensable part of people's daily entertainment and communication.
While running, an application program generally provides a user interface (UI) to interact with the user. At present, some application programs display a virtual person image on the user interface; for example, a game application may display a virtual character on the user interface. In the prior art, to improve the user experience, some applications place the rough silhouette of a real person on the user interface. Specifically, a person image is obtained, monochrome information and the person's silhouette are extracted from it, and the silhouette is then rendered at a fixed position in the user interface according to the monochrome information.
During research on and practice with the prior art, the inventors of the present invention found that in the existing scheme the person's silhouette is rendered on the user interface based only on monochrome information, so the image display effect is poor.
Summary of the invention
Embodiments of the present invention provide an image processing method and device that can improve the image display effect.
An embodiment of the present invention provides an image processing method, including:
receiving an information acquisition request;
obtaining depth image information of a person according to the information acquisition request, the depth image information including a person image and depth information of the pixels in the person image;
obtaining rendering position indication information, the rendering position indication information indicating a target rendering region in an interactive interface; and
rendering the person image in the target rendering region according to the rendering position indication information and the depth information corresponding to the pixels.
Correspondingly, an embodiment of the present invention also provides an image processing apparatus, including:
a receiving unit, configured to receive an information acquisition request;
a first acquisition unit, configured to obtain depth image information of a person according to the information acquisition request, the depth image information including a person image and depth information of the pixels in the person image;
a second acquisition unit, configured to obtain rendering position indication information, the rendering position indication information indicating a target rendering region in an interactive interface; and
a rendering unit, configured to render the person image in the target rendering region according to the rendering position indication information and the depth information corresponding to the pixels.
An embodiment of the present invention receives an information acquisition request; obtains depth image information of a person according to the request, the depth image information including a person image and depth information of the pixels in the person image; obtains rendering position indication information, which indicates a target rendering region in an interactive interface; and renders the person image in the target rendering region according to the rendering position indication information and the depth information corresponding to the pixels. Because this scheme renders the person image in the target region based on the depth information of the image pixels, it can improve the image display effect compared with the prior art.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative work.
Fig. 1a is a flow chart of the image processing method provided in Embodiment One of the present invention;
Fig. 1b is a schematic diagram of image cropping provided in Embodiment One of the present invention;
Fig. 1c is another schematic diagram of image cropping provided in Embodiment One of the present invention;
Fig. 1d is a further schematic diagram of image cropping provided in Embodiment One of the present invention;
Fig. 2a is a flow chart of an image processing method provided in Embodiment Two of the present invention;
Fig. 2b is a flow chart of the person image preprocessing provided in Embodiment Two of the present invention;
Fig. 2c is a schematic diagram of an interface provided in Embodiment Two of the present invention;
Fig. 3a is a schematic structural diagram of an image processing apparatus provided in Embodiment Three of the present invention;
Fig. 3b is a schematic structural diagram of another image processing apparatus provided in Embodiment Three of the present invention;
Fig. 4 is a schematic structural diagram of a terminal provided in Embodiment Four of the present invention.
Embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative work fall within the protection scope of the present invention.
Embodiments of the present invention provide an image processing method and device, which are described in detail below. Note that the numbering of the following embodiments does not imply any order of preference among them.
Embodiment One
This embodiment is described from the perspective of an image processing apparatus. The image processing apparatus may be implemented as an independent entity, or integrated into other equipment, for example a terminal; the terminal may specifically include a mobile phone, a tablet computer, a notebook computer, a personal computer (PC), a smart television, a game console, or similar devices.
An image processing method includes: receiving an information acquisition request; obtaining depth image information of a person according to the information acquisition request, the depth image information including a person image and depth information of the pixels in the person image; obtaining rendering position indication information, which indicates a target rendering region in an interactive interface; and rendering the person image in the target rendering region according to the rendering position indication information and the depth information corresponding to the pixels.
As shown in Fig. 1a, the specific flow of the image processing method may be as follows:
101. Receive an information acquisition request.
For example, a game terminal may receive an information acquisition request sent by a game engine.
102. Obtain depth image information of a person according to the information acquisition request, where the depth image information includes a person image and depth information of the pixels in the person image.
The depth image information may be collected by a depth camera, such as a motion-sensing camera.
When obtaining the depth image information of the person, the information may specifically be extracted from a local storage unit. For example, a person-image processing module in the game terminal may, according to the information acquisition request, extract from the local storage unit the depth image information collected by the depth camera.
In practical applications, before the information acquisition request is received, a connection with the depth camera may be established; the depth camera is then initialized, the depth image information of the person is collected by the depth camera, and the collected depth image information is stored in the corresponding storage unit.
In this embodiment, the depth image information may include the person image, the depth information of the pixels in the person image, the person identification label corresponding to each pixel, and so on. The depth information may include the color information corresponding to each pixel and the clarity (definition) corresponding to each pixel; the depth information may be generated by the depth camera.
The clarity corresponding to a pixel may be the clarity of the pixel relative to the background image of the interactive interface, that is, the degree to which the pixel blends with the background image of the interactive interface.
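For concreteness, the per-pixel depth image information described above can be modeled roughly as follows. This is an illustrative sketch only; all type and field names are assumptions of the editor, not terms defined by the patent:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PixelDepthInfo:
    color: Tuple[int, int, int]  # per-pixel color information (R, G, B)
    clarity: float               # blend degree with the interface background, e.g. 0-255
    person_label: int            # person identification label (0 = background)

@dataclass
class DepthImageInfo:
    width: int
    height: int
    pixels: List[PixelDepthInfo]  # row-major pixel data of the person image

# A 2x1 toy image: one person pixel and one background pixel.
info = DepthImageInfo(
    width=2, height=1,
    pixels=[PixelDepthInfo((200, 180, 160), 148.0, 1),
            PixelDepthInfo((0, 0, 0), 0.0, 0)],
)
print(len(info.pixels))  # → 2
```

A real implementation would fill such a structure from the depth camera's SDK rather than by hand.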
103. Obtain rendering position indication information, where the rendering position indication information indicates a target rendering region in the interactive interface.
The rendering position indication information may be generated based on a rendering position preset by the user, or based on the background image of the interactive interface.
In this embodiment, the target rendering region may be set according to actual requirements; for example, it may be the middle region or the upper region of a game's interactive interface.
104. Render the person image in the target rendering region according to the rendering position indication information and the depth information corresponding to the pixels.
For example, the game engine may render the person image in the target rendering region according to the rendering position indication information and the depth information corresponding to the pixels.
Optionally, the user's spatial position is not necessarily within a reasonable range. For example, if the user stands too close to the camera, or the camera's elevation angle is too high, only the user's upper body will fall within the camera's field of view; rendering such an image directly in the interactive interface gives a poor display effect. Therefore, to improve the display effect and save terminal resources, this embodiment may adjust the depth information of the pixels when a certain body part of the user is not in the person image. That is, the step "rendering the person image in the target rendering region according to the rendering position indication information and the depth information corresponding to the pixels" may include:
determining whether a first target body part of the person is in the person image;
if not, adjusting the depth information of the pixels in the person image to obtain adjusted depth information; and
rendering the person image in the target rendering region according to the rendering position indication information and the adjusted depth information corresponding to the pixels.
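The three steps above reduce to a guard on the clarity value used at render time. A minimal sketch, assuming the adjustment is a simple turned-down clarity (function names and values are illustrative, not from the patent):

```python
def depth_info_for_render(part_in_image: bool, clarity: float,
                          reduced_clarity: float = 50.0) -> float:
    """Return the clarity to use when rendering: if the first target body
    part is missing from the person image, return a turned-down clarity so
    the rendered image looks blurred and prompts the user to reposition."""
    return clarity if part_in_image else reduced_clarity

print(depth_info_for_render(True, 148.0))   # part present, clarity kept → 148.0
print(depth_info_for_render(False, 148.0))  # part missing, clarity reduced → 50.0
```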
The first target body part may be set according to actual requirements; for example, it may be the head, a foot, or a shoulder.
For example, where the depth information includes the clarity corresponding to each pixel, the clarity of the pixels may be adjusted (e.g. turned down) when the first target body part is not in the person image. That is, the step "adjusting the depth information of the pixels in the person image" may include: adjusting the clarity of the pixels in the person image.
For example, when it is determined that the person's foot is not in the person image, the clarity of the pixels in the person image is reduced. The image then rendered in the interactive interface becomes blurred, which reminds the user that his or her current position is unreasonable and guides the user to correct it.
There may be many ways to determine whether a target body part of the person is in the person image. For example, to improve accuracy, the determination may be based on the skeleton coordinates of the person. That is, the step "determining whether the first target body part of the person is in the person image" may include:
obtaining the skeleton coordinates of the person, and converting the skeleton coordinates into image coordinates;
obtaining the set of image coordinate points corresponding to the first target body part;
judging whether the image coordinate points in the set are within a preset coordinate region;
if so, determining that the first target body part of the person is in the person image; and
if not, determining that the first target body part of the person is not in the person image.
The skeleton coordinates of the person may be collected by the depth camera, for example by an infrared sensor in the depth camera, and may be obtained from the local storage unit. Since skeleton coordinates are generally three-dimensional position coordinates, for convenience this embodiment may convert them into two-dimensional image coordinates; then, judging whether the set of image coordinate points corresponding to the first target body part lies within the preset coordinate region determines whether the first target body part is in the person image.
The preset coordinate region may be set based on the image coordinate points of the person image. For example, the preset coordinate region corresponding to the foot may be the bottom coordinate region of the person image or the coordinate region of the whole person image; the preset coordinate region corresponding to the head may be the top coordinate region of the person image or the coordinate region of the whole person image.
For example, after the coordinate conversion, the set of image coordinate points corresponding to the foot or the head may be obtained, and it is judged whether all the image coordinate points in the set lie within the preset coordinate region. If so, the foot or the head is determined to be in the person image; otherwise it is determined not to be.
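The conversion and region test above can be sketched as follows. The patent does not specify the conversion formula, so a plain pinhole projection with made-up camera intrinsics is assumed here purely for illustration:

```python
def skeleton_to_image(joint_3d, fx=525.0, fy=525.0, cx=320.0, cy=240.0):
    """Project a 3D skeleton joint (x, y, z in meters, camera space) to 2D
    pixel coordinates with a simple pinhole model; intrinsics are assumed."""
    x, y, z = joint_3d
    return (fx * x / z + cx, fy * y / z + cy)

def part_in_image(points_2d, region):
    """True when every image coordinate point of the body part lies inside
    the preset coordinate region given as (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = region
    return all(x0 <= x <= x1 and y0 <= y <= y1 for x, y in points_2d)

# Two foot joints roughly 2 m from the camera, projected and then tested
# against the full 640x480 image as the preset coordinate region.
foot = [skeleton_to_image((0.1, 0.8, 2.0)), skeleton_to_image((0.12, 0.82, 2.0))]
print(part_in_image(foot, (0, 0, 640, 480)))  # → True
```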
Optionally, in practical applications, whether an image coordinate point is within a coordinate region may be determined from its coordinate value. For example, it may be determined whether the coordinate value of an image coordinate point lies within the coordinate value range corresponding to the preset coordinate region; if so, the point is determined to lie within the preset coordinate region. That is, the step "judging whether the image coordinate points in the set are within the preset coordinate region" may include:
judging whether the coordinate values of the image coordinate points in the set are within the coordinate value range corresponding to the preset coordinate region; if so, judging that the image coordinate points in the set are within the preset coordinate region; otherwise, judging that they are not. The coordinate value of an image coordinate point may be its X coordinate and/or its Y coordinate, set according to the actual situation; for example, when the first target body part is the foot, the coordinate value may be the Y coordinate.
To speed up determining whether the first target body part is in the person image, this embodiment may select a single target image coordinate point from the set according to coordinate value; when that target point is judged to lie within the preset coordinate region, all points in the set are taken to lie within it. For example, when the first target body part is the foot, the point in the set with the largest or smallest Y value (depending on the direction and origin of the image coordinate system) may be chosen as the target image coordinate point; when the first target body part is the head, the point with the smallest or largest Y value (the opposite of the foot) may be chosen. That is, the step "judging whether the image coordinate points in the set are within the preset coordinate region" may include:
selecting a target image coordinate point from the set according to the coordinate values of the image coordinate points in the set;
judging whether the coordinate value of the target image coordinate point is within the coordinate value range corresponding to the preset coordinate region;
if so, judging that the image coordinate points in the set are within the preset coordinate region; and
if not, judging that the image coordinate points in the set are not within the preset coordinate region.
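The extremal-point shortcut can be sketched as below, assuming a top-left image origin with Y growing downward (so the foot has the largest Y value); the function name is illustrative:

```python
def set_in_region_fast(points_2d, y_range, part="foot"):
    """Speed-up from the embodiment: instead of testing every point, pick
    one extremal target point (max Y for the foot, min Y for the head) and
    test only that point against the preset Y coordinate range."""
    target = max(points_2d, key=lambda p: p[1]) if part == "foot" \
        else min(points_2d, key=lambda p: p[1])
    lo, hi = y_range
    return lo <= target[1] <= hi

foot_pts = [(300, 450), (310, 470), (305, 460)]
print(set_in_region_fast(foot_pts, (0, 480), "foot"))  # max Y 470 ≤ 480 → True
print(set_in_region_fast(foot_pts, (0, 465), "foot"))  # max Y 470 > 465 → False
```

Only one comparison is made per frame instead of one per coordinate point, at the cost of assuming the extremal point is the one that would leave the region first.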
Optionally, to improve the accuracy of the depth information adjustment, this embodiment may adjust the depth information (e.g. the clarity) based on the offset information between the image coordinate points and the preset coordinate region. That is, the step "adjusting the depth information of the pixels in the person image" may include:
obtaining the offset information between the image coordinate points in the set and the preset coordinate region; and
adjusting the depth information of the pixels in the person image according to the offset information.
For example, the clarity corresponding to the pixels in the person image may be adjusted according to the offset information. Specifically, a corresponding target clarity may be obtained from the offset information, and the clarity corresponding to the pixels in the person image is then adjusted according to the target clarity.
The offset information may include an offset direction, an offset distance, and so on; the offset distance may be obtained from the difference between the coordinate value of a coordinate point and the coordinate value range corresponding to the preset coordinate region.
This embodiment may obtain the offset information between every coordinate point in the set and the preset coordinate region, or only between certain coordinate points and the preset coordinate region. For example, one coordinate point may be selected from the set and its offset from the preset coordinate region obtained. For another example, when a target coordinate point is selected by coordinate value and used to determine whether the set lies within the preset coordinate region, the offset information between that target image coordinate point and the preset coordinate region may be obtained. That is, the step "obtaining the offset information between the image coordinate points in the set and the preset coordinate region" may include:
obtaining the offset information between the target image coordinate point and the preset coordinate region.
For example, the offset distance and offset direction between the target image coordinate point and the preset coordinate region (its center coordinate point, or a boundary coordinate point on the region's boundary) may be obtained.
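A minimal sketch of the offset computation against a coordinate value range (representing the preset region as a closed interval is an assumption of this sketch):

```python
def offset_from_range(value, lo, hi):
    """Offset of a coordinate value from a preset coordinate value range:
    distance 0 inside [lo, hi]; otherwise the gap to the nearer bound,
    with direction -1 (below lo) or +1 (above hi)."""
    if value < lo:
        return lo - value, -1
    if value > hi:
        return value - hi, +1
    return 0, 0

print(offset_from_range(500, 0, 480))  # 20 past the far bound → (20, 1)
print(offset_from_range(450, 0, 480))  # inside the range → (0, 0)
```

A larger offset distance would then map to a lower target clarity, blurring the image more the further the user drifts out of the reasonable range.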
In this embodiment the target clarity may be obtained from the offset information. Optionally, in practical applications the user's skeleton coordinates themselves may carry some error, which can make the target clarity obtained from the offset information swing widely; yet the user's spatial position has strong continuity, and the clarity value normally stays within one range. Adjusting the clarity at such moments would cause unstable jumps in image clarity and a poor display effect. To improve the stability of the clarity and the display effect, this embodiment may low-pass filter the clarity. That is, the step "adjusting the clarity corresponding to the pixels in the person image according to the offset information" may include:
obtaining a corresponding target clarity according to the offset information;
judging whether the target clarity is within a preset threshold range;
if so, adjusting the clarity corresponding to the pixels in the person image according to the target clarity; and
if not, filtering out the target clarity.
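Read this way, the filter amounts to rejecting out-of-range target clarity values and keeping the previous one. A sketch under that reading (the threshold range 0-255 is an assumed example):

```python
def filtered_clarity(current, target, lo=0.0, hi=255.0):
    """Low-pass step from the embodiment: accept the new target clarity
    only when it falls within the preset threshold range [lo, hi];
    otherwise discard it and keep the current clarity, avoiding unstable
    jumps caused by skeleton-coordinate noise."""
    return target if lo <= target <= hi else current

print(filtered_clarity(148.0, 90.0))   # in range, accepted → 90.0
print(filtered_clarity(148.0, 400.0))  # out of range, filtered out → 148.0
```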
Optionally, in practical applications, when the user is near the border of the reasonable range, jitter in the skeleton coordinates can cause the target body part (e.g. foot or head) to be misjudged as not being in the person image. This embodiment may therefore avoid such misjudgments, and improve the accuracy of the determination, based on the number of consecutive determinations that the target body part is not in the person image. That is, after it is determined that the first target body part is not in the person image, and before the depth information of the pixels in the person image is adjusted, the method of this embodiment may further include:
obtaining the current number of consecutive determinations that the first target body part is not in the person image;
judging whether this consecutive count is greater than a preset count; and
if so, performing the step of adjusting the depth information of the pixels in the person image.
The preset count may be set according to the actual situation, e.g. 5, 6, or 7.
For example, after it is determined that the foot is not in the person image, the current number of consecutive determinations that the foot is not in the person image may be obtained. If that count is greater than the preset count, the foot really is not in the person image, and the depth information of the pixels in the person image may then be adjusted, for example the clarity of the pixels.
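A sketch of this consecutive-miss counter, with an assumed preset count of 5 (class and method names are illustrative):

```python
class PartPresenceDebouncer:
    """Trigger the clarity adjustment only after the body part has been
    judged absent for more than `preset_count` consecutive frames, so
    skeleton-coordinate jitter near the border does not cause flicker."""
    def __init__(self, preset_count=5):
        self.preset_count = preset_count
        self.misses = 0

    def should_adjust(self, part_in_image: bool) -> bool:
        # Any frame where the part is present resets the consecutive count.
        self.misses = 0 if part_in_image else self.misses + 1
        return self.misses > self.preset_count

d = PartPresenceDebouncer(preset_count=5)
results = [d.should_adjust(False) for _ in range(7)]
print(results)  # → [False, False, False, False, False, True, True]
```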
In this embodiment, from the terminal's point of view, the user's reasonable position range may be defined according to whether target body parts of the person are in the person image. For example, a reasonable position range may be defined as: both the first target body part and the second target body part of the person are in the person image. A common reasonable position range is: the person's head is in the image, and the person's foot is also in the image or in the bottom region of the image. Therefore, after determining that the first target body part is in the person image, the method of this embodiment also needs to determine whether the second target body part is in the person image; if it is not, the depth information needs to be adjusted, to remind the user and to save terminal resources. That is, the step "rendering the person image in the target rendering region according to the rendering position indication information and the depth information corresponding to the pixels" may further include:
when it is determined that the first target body part of the person is in the person image, determining whether the second target body part of the person is in the person image;
if not, adjusting the depth information of the pixels in the person image to obtain adjusted depth information; and
rendering the person image in the target rendering region according to the rendering position indication information and the adjusted depth information corresponding to the pixels.
The manner of determining whether the second target body part is in the person image is similar to the manner of determining the first target body part described above: it may be based on the image coordinate points corresponding to the second target body part and a corresponding preset coordinate region; refer to the description above. The depth information may likewise be adjusted by referring to the adjustment process described above: for example, offset information may be obtained and the depth information (e.g. the clarity) adjusted according to it; misjudgment may be avoided based on the number of consecutive determinations that the second target body part is not in the person image; and the clarity may be low-pass filtered. The detailed process is given above and is not repeated here.
Optionally, the person identification labels corresponding to the pixels in the depth image information may carry some error, which causes holes to appear in the person image and reduces the display effect. To eliminate such image holes and improve the display effect, the method of this embodiment may preprocess the person identification labels and the clarity. That is, where the depth image information further includes the person identification label corresponding to each pixel, and the depth information includes the clarity corresponding to each pixel, after obtaining the depth image information and before rendering the person image, the method of this embodiment further includes:
determining a person image region in the person image;
according to the person identification labels corresponding to the pixels in the person image region, determining interrupted pixels, i.e. pixels at which the person identification label is interrupted, in the person image region;
regenerating the person identification labels of the interrupted pixels; and
adjusting the clarity corresponding to the pixels in the person image region according to a preset clarity range.
Optionally, the step "according to the person identification labels corresponding to the pixels in the person image region, determining interrupted pixels at which the person identification label is interrupted in the person image region" may include:
scanning the person identification labels corresponding to each row or each column of pixels in the person image region in turn, to obtain a label scanning result; and
determining, according to the label scanning result, the interrupted pixels in the person image region at which the label is interrupted and which are not on the boundary of the person image region.
For example, the person identification labels corresponding to each row of pixels in the person image region may be scanned row by row; then, based on the scanning result, the interrupted pixels at which the label is interrupted, and which are not on the boundary of the person image region, are determined.
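The row scan for interrupted labels might look like the following, assuming label 1 marks person pixels and 0 marks non-person pixels (a simplification; the patent does not specify the label encoding):

```python
def interrupted_pixels_in_row(labels, boundary_cols):
    """Scan one row of person identification labels and return the columns
    where the label is interrupted: a 0 with person labels somewhere on
    both sides, skipping columns on the region boundary."""
    holes = []
    for col in range(1, len(labels) - 1):
        if labels[col] == 0 and col not in boundary_cols:
            if any(v == 1 for v in labels[:col]) and any(v == 1 for v in labels[col + 1:]):
                holes.append(col)
    return holes

row = [0, 1, 1, 0, 1, 1, 0]  # hole at column 3; columns 0 and 6 are boundary
print(interrupted_pixels_in_row(row, boundary_cols={0, 6}))  # → [3]
```

The labels of the detected hole columns would then be regenerated (e.g. set back to the person label) before the clarity adjustment.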
The preset clarity range corresponding to the person identification label may be set according to actual requirements; for example, it may be 0-150, 0-255, etc. In this embodiment, the step "adjusting the clarity corresponding to the pixels in the person image region according to the preset clarity range corresponding to the person identification label" may include:
selecting a target clarity from the preset clarity range; and
adjusting the clarity corresponding to the pixels in the person image region according to the target clarity.
For example, when the preset clarity range is 0-155, the clarity corresponding to the pixels in the person image may be set to 148. It should be understood that in this embodiment the clarity corresponding to different pixels in the person image region may be the same or different, and may be set according to the actual situation.
Optionally, to reduce the jaggedness of the person image edge and improve the display effect, after adjusting the clarity corresponding to the pixels in the person image region according to the preset clarity range and before rendering the person image, the method of this embodiment may further include:
determining the boundary pixels located on the region boundary in the person image region;
obtaining a corresponding clarity range according to the clarity corresponding to the pixels in the background image region of the person image and the current clarity corresponding to the boundary pixels; and
adjusting the clarity corresponding to the boundary pixels according to that clarity range.
For example, if the clarity corresponding to the pixels in the background image region is 0 and the current clarity of a boundary pixel is 188, the obtained clarity range is 0-188; the clarity value of the boundary pixel may then be set to some value within 0-188 to smooth the edge of the person image region and improve the display effect. Different boundary pixels may be set to the same clarity, or to different clarities.
Optionally, in order to further improve the display effect, in the case where the character identification labels corresponding to each row of pixels in the character image region are scanned row by row, after the definition corresponding to the boundary pixels is adjusted according to the definition range and before the character image is rendered, the method of this embodiment may further include:
scanning, column by column, the character identification labels of each column of pixels in the character image region, to obtain a label scanning result;
determining, according to the label scanning result, the interrupted pixels in the character image region, i.e. the pixels whose label is interrupted and which are not on the boundary of the character image region;
adjusting the definition corresponding to the pixels in the character image region according to the preset definition range corresponding to the character identification label.
For example, the interrupted pixels can be determined by scanning the character identification label of each row of pixels in the image region row by row, after which the definitions of the interrupted pixels and the edge pixels are adjusted; subsequently, the character identification labels of each column of pixels in the image region are scanned column by column to determine interrupted pixels again, and the definitions of these interrupted pixels and the edge pixels are adjusted again. The manner in which the definitions of the interrupted pixels and the edge pixels are adjusted the second time is the same as the previous adjustment, and is not repeated here.
Optionally, in practice the image resolution provided by the camera is usually greater than the image resolution that the person can actually occupy, so part of the pixels are invalid and terminal system resources are wasted. For example, the image resolution commonly provided by a camera is 640*480, yet even when the user's whole body is within the camera range with both arms spread, an average adult only occupies about 480*480, so at least 120*480 pixels are invalid. In order to save terminal system resources, this embodiment may crop the character image, for example cropping a 480*480 character image out of the character image. That is, in the method of this embodiment, the step of "rendering the character image in the target rendering region according to the rendering position indication information and the adjusted depth information corresponding to the pixels" may include:
obtaining the position of the person in the character image;
cropping the character image according to the position and a preset image resolution, to obtain a target character image of the preset image resolution;
rendering the target character image in the target rendering region according to the rendering position indication information and the adjusted depth information corresponding to the pixels in the target character image.
Specifically, the step of "cropping the character image according to the position and the preset image resolution" may include:
determining a cropping position according to the position;
cropping the character image according to the cropping position and the preset image resolution.
This embodiment can dynamically track the position of the user and dynamically adjust the cropping position according to the user's real-time position in the character image, which both preserves the user's range of motion and avoids processing invalid pixels, saving terminal system resources.
For example, referring to Figure 1b, when the preset image resolution is 480*480 and the person is located in the middle of the image, the image center point can be used as the cropping point, and 240 pixels are taken on each side (up, down, left and right) of the cropping point, so that a 480*480 target character image is cropped out of the character image. As another example, when the preset image resolution is 480*480 and the person is located at the left or right side of the image, referring to Figures 1c and 1d, the cropping point is determined accordingly at the left or right side of the character image, and 240 pixels are taken on each side of the cropping point, so that a 480*480 target character image is cropped out of the character image.
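The person-centered, clamped crop described above can be sketched as follows; the function name and one-dimensional treatment (the vertical axis works the same way) are illustrative assumptions, not part of the patent:

```python
def crop_window(person_x, image_width, crop_width):
    """Center the crop window on the person's x position, then clamp it
    so the window stays entirely inside the source image."""
    left = person_x - crop_width // 2
    left = max(0, min(left, image_width - crop_width))
    return (left, left + crop_width)

# 640-wide camera frame cropped to 480 pixels:
print(crop_window(320, 640, 480))  # person centered -> (80, 560)
print(crop_window(100, 640, 480))  # person near the left edge -> (0, 480)
```

The clamping step is what keeps the window valid when the person stands near an image edge, matching the "left side or right side" examples above.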
In summary, in the embodiment of the present invention, an information acquisition request is received; the depth image information of a person is then obtained according to the information acquisition request, the depth image information including a character image and the depth information of the pixels in the character image; rendering position indication information is obtained, the rendering position indication information indicating a target rendering region in an interactive interface; and the character image is rendered in the target rendering region according to the rendering position indication information and the depth information corresponding to the pixels. Because this scheme renders the character image in the target region based on the depth information of the image pixels, it can simulate an immersive, on-the-scene feeling; compared with the prior art, it can improve the image display effect and improve the interaction between the user and the terminal.
Embodiment two,
According to the method described in Embodiment One, a more detailed example is given below.
In this embodiment, the description takes the case where the image processing apparatus is specifically integrated in a terminal as an example.
The image processing apparatus can be integrated in the terminal in a variety of ways; for example, it can be integrated in the terminal in the form of software.
As shown in Figure 2a, an image processing method may have the following specific flow:
201. The terminal receives an information acquisition request, and obtains the depth image information of a person according to the information acquisition request.
For example, the depth image information may be obtained from a local storage unit of the terminal according to the information acquisition request.
The depth image information may include a character image, the depth information of the pixels in the character image, the character identification labels corresponding to the pixels, and so on. The depth information may include the color information corresponding to the pixels and the definition corresponding to the pixels; the depth information may be generated by a depth camera.
The definition corresponding to a pixel may be the definition of the pixel relative to the background image in the interactive interface, i.e. the degree to which the pixel is blended with the background image of the interactive interface.
In this embodiment, the depth image information may be depth image information collected by a depth camera connected to the terminal, where the depth camera may be a motion-sensing camera or the like.
202. The terminal preprocesses the character image to obtain a processed character image.
Specifically, in order to avoid holes in the character image and serious jagging at the image edges, and thereby improve the image display effect, this embodiment can detect the interrupted pixels in the character image whose labels are interrupted, regenerate the character identification labels of the interrupted pixels, and adjust the definition of the interrupted pixels; in addition, this embodiment can also adjust the definition of the person's edge pixels, to smooth the person's edges. That is, referring to Figure 2b, the process by which the terminal preprocesses the character image may include:
2021. Determine a character image region in the character image.
For example, the character image region is determined in the character image according to character contour information, the character image region containing the person.
2022. Determine, according to the character identification labels corresponding to the pixels in the character image region, the interrupted pixels in the character image region whose labels are interrupted.
Specifically, the character identification labels corresponding to each row or each column of pixels in the character image region can be scanned in turn, to obtain a label scanning result;
the interrupted pixels in each row or each column, i.e. the pixels whose label is interrupted and which are not on the boundary of the character image region, are then determined according to the label scanning result.
For example, the character identification labels corresponding to each row of pixels in the character image region can be scanned row by row, and the interrupted pixels in each row (pixels whose label is interrupted and which are not on the boundary of the character image region) are then determined based on the scanning result.
2023. Regenerate the character identification labels of the interrupted pixels.
For example, if the person ID originally corresponding to an interrupted pixel is 0, the person ID corresponding to the interrupted pixel can be set to 1, where a pixel ID of 0 indicates a background image pixel and a pixel ID of 1 indicates that the pixel is a person pixel.
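A minimal sketch of this row-scan hole repair, using the 0/1 person-ID convention from the example above; the function name and list-of-lists mask layout are illustrative assumptions:

```python
def repair_row_holes(id_mask):
    """For each row, relabel 0-pixels that lie strictly between person
    pixels (label 1) as person pixels; pixels on the row's outer edges
    of the person region are left untouched."""
    repaired = []
    for row in id_mask:
        new_row = list(row)
        ones = [i for i, v in enumerate(row) if v == 1]
        if len(ones) >= 2:
            first, last = ones[0], ones[-1]
            for i in range(first + 1, last):
                if new_row[i] == 0:      # label interrupted inside the region
                    new_row[i] = 1       # regenerate the person ID
        repaired.append(new_row)
    return repaired

mask = [[0, 1, 0, 1, 1, 0],
        [0, 1, 1, 1, 1, 0]]
print(repair_row_holes(mask))  # -> [[0, 1, 1, 1, 1, 0], [0, 1, 1, 1, 1, 0]]
```

The same scan can be run column by column for a second pass, as the later steps of this embodiment describe.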
2024. Adjust the definition corresponding to the pixels in the character image region according to the preset definition range corresponding to the character identification label.
The preset definition range corresponding to the character identification label can be set according to actual requirements; for example, it can be 0-150, 0-255, and so on.
Specifically, the adjustment process may include:
selecting a target definition from the preset definition range;
adjusting the definition corresponding to the pixels in the character image region according to the target definition.
For example, when the preset definition range is 0-159, the definition corresponding to the pixels in the character image may be set to 120. The target definition can be selected in a variety of ways, for example randomly or according to a certain rule.
In this embodiment, the definitions corresponding to different pixels in the character image region may be the same or different, and can be set according to actual conditions.
2025. Determine the boundary pixels located on the region boundary of the character image region.
For example, the starting pixel or ending pixel of each row or each column in the character image region can be determined as a boundary pixel.
2026. Obtain a definition range according to the definition corresponding to the pixels in the background image region of the character image and the definition currently corresponding to the boundary pixels.
This embodiment can determine the background image region in the character image, then obtain the definition corresponding to the pixels in the background image region and the definition currently corresponding to the boundary pixels, and form a definition range from these two definitions. For example, if the definition corresponding to the pixels in the background image region is 7 and the current definition of a boundary pixel is 198, the obtained definition range is 7-198.
2027. Adjust the definition corresponding to the boundary pixels according to the definition range.
For example, a definition can be selected from the definition range as the definition of a boundary pixel; the definition can be selected in a variety of ways, for example randomly or according to a certain rule. The definitions corresponding to different boundary pixels may be the same or different.
Optionally, in order to further improve the display effect, after the character identification labels corresponding to each row of pixels in the character image region have been scanned row by row and the definitions of the interrupted pixels in each row have been adjusted, the method of this embodiment can further scan the character identification labels corresponding to each column of pixels in the character image region column by column to obtain interrupted pixels, and then adjust the definitions of the interrupted pixels and the edge pixels again. That is, the process by which the terminal preprocesses the character image may further include:
scanning, column by column, the character identification labels of each column of pixels in the character image region, to obtain a label scanning result;
determining, according to the label scanning result, the interrupted pixels in each column, i.e. the pixels whose label is interrupted and which are not on the boundary of the character image region;
adjusting the definition corresponding to the pixels in the character image region according to the preset definition range corresponding to the character identification label;
determining again the boundary pixels in each column (e.g. the starting pixel and/or ending pixel of each column);
obtaining a definition range according to the definition corresponding to the pixels in the background image region of the character image and the definition currently corresponding to the boundary pixels;
adjusting the definition corresponding to the boundary pixels according to the definition range.
203. The terminal obtains the skeleton coordinates of the person, and converts the skeleton coordinates into image coordinates.
The skeleton coordinates of the person can be collected by a depth camera, for example by an infrared sensor in the depth camera; when the skeleton coordinates are needed, they can be obtained from a local storage unit. Because skeleton coordinates are generally three-dimensional position coordinates, for convenience this embodiment can convert the skeleton coordinates into two-dimensional image coordinates.
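The 3D-to-2D conversion can be sketched with a standard pinhole-camera projection; the patent does not specify the mapping, and real depth-camera SDKs provide their own coordinate-mapping functions, so the focal lengths and principal point below are illustrative assumptions:

```python
def skeleton_to_image(x, y, z, fx, fy, cx, cy):
    """Project a 3D skeleton point (camera space, z > 0, meters) to 2D
    pixel coordinates using a pinhole model: focal lengths fx, fy and
    principal point (cx, cy)."""
    u = fx * x / z + cx
    v = fy * y / z + cy
    return (u, v)

# A joint 2 m in front of the camera, 0.5 m to the right,
# with assumed intrinsics fx = fy = 525, principal point (320, 240):
print(skeleton_to_image(0.5, 0.0, 2.0, 525.0, 525.0, 320.0, 240.0))
# -> (451.25, 240.0)
```

After this projection, the per-joint image coordinate point sets used in the following steps can be built directly from the tracked skeleton.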
204. The terminal judges whether the first image coordinate point set corresponding to the person's feet is in a first preset coordinate region; if so, step 205 is performed; if not, step 206 is performed.
This embodiment takes into account that the user's spatial position is not necessarily within a reasonable position range; for example, if the user is too close to the camera or the camera's elevation angle is too high, only the user's upper body will be within the camera's coverage, and rendering that image directly in the interactive interface would give a poor display effect. Therefore, in order to improve the image display effect and save terminal resources, this embodiment can adjust the depth information (e.g. the definition) of the pixels when the user's spatial position is not within the reasonable position range, so as to remind the user that he or she is not within the reasonable position range.
In this embodiment, the reasonable position range of the user can be defined according to whether a target body part of the person is in the character image. For example, the reasonable position range can be defined as: the person's head is in the image, and the person's feet are also in the image or in the bottom region of the image. Therefore, this embodiment needs to determine whether the feet and head are in the image, in order to determine whether the user's spatial position is within the reasonable position range.
In order to speed up the judgment, this embodiment can select one target image coordinate point from the image coordinate point set according to the coordinate values of the image coordinate points in the set; when it is judged that this target image coordinate point is in the preset coordinate region, all coordinate points in the image coordinate point set are taken by default to be located in the preset coordinate region. For example, the coordinate point with the maximum or minimum Y value (depending on the direction and origin of the image coordinate system) can be selected from the first image coordinate point set as the target image coordinate point; when this coordinate point is located in the first preset coordinate region, it is determined that the first image coordinate point set corresponding to the feet is located in the first preset coordinate region, i.e. the person's feet are in the image. Therefore, the step "the terminal judges whether the first image coordinate point set corresponding to the person's feet is in the first preset coordinate region" may include:
the terminal selects a first target image coordinate point from the first image coordinate point set according to the coordinate values corresponding to the image coordinate points in the first image coordinate point set;
the terminal judges whether the coordinate values of the first target image coordinate point are within the first coordinate value range corresponding to the first preset coordinate region;
if so, it is judged that the first image coordinate point set is located in the first preset coordinate region;
if not, it is judged that the first image coordinate point set is not in the first preset coordinate region.
The first preset coordinate region can be set based on the image coordinate points of the character image; for example, the first preset coordinate region corresponding to the feet can be the bottom coordinate region of the character image, and so on.
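The foot check can be sketched as follows, assuming an image coordinate system whose y axis grows downward so the foot point with the largest y is the lowest one; the function name and region bounds are illustrative:

```python
def feet_in_region(foot_points, y_min, y_max):
    """Pick the foot point with the largest y (the lowest point when the
    y axis grows downward) and treat the whole set as inside the region
    iff that single target point is inside it."""
    target = max(foot_points, key=lambda p: p[1])
    return y_min <= target[1] <= y_max

# Bottom strip of a 480-pixel-tall image, e.g. rows 440-479:
print(feet_in_region([(200, 470), (260, 455)], 440, 479))  # -> True
print(feet_in_region([(200, 100)], 440, 479))              # -> False
```

Checking only the single extreme point instead of every point in the set is what speeds up the judgment, as the text explains.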
Optionally, in practical applications, when the user is near the border of the reasonable range, jitter in the skeleton coordinates may cause an erroneous judgment that the target body part (e.g. the feet or head) is not in the character image. Therefore, this embodiment can avoid this situation based on the number of consecutive judgments that the body part is not in the character image, improving the accuracy of the judgment; for example, the target body part is determined not to be in the character image only when the number of consecutive such judgments exceeds a certain value. That is, after it is determined that the first image coordinate point set is not in the first preset region, and before the depth information of the pixels in the character image is adjusted (i.e. between steps 204 and 206), the method of this embodiment may further include:
the terminal obtains the number of consecutive judgments that the first image coordinate point set is not in the first preset coordinate region;
the terminal judges whether this number of consecutive judgments is greater than a preset number;
if so, step 206 is performed.
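The consecutive-judgment debounce above can be sketched as a small counter; the class name and threshold value are illustrative assumptions:

```python
class MissDebouncer:
    """Report 'out of image' only after the region check has failed more
    than `threshold` consecutive times, so single-frame skeleton jitter
    near the border does not trigger a false adjustment."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.misses = 0

    def update(self, in_region):
        # Any successful check resets the consecutive-miss counter.
        self.misses = 0 if in_region else self.misses + 1
        return self.misses > self.threshold

d = MissDebouncer(threshold=2)
print([d.update(v) for v in (False, False, False, True, False)])
# -> [False, False, True, False, False]
```

Only the third consecutive failure (exceeding the preset number 2) triggers the adjustment path, and a single in-region frame resets the count.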
205. The terminal judges whether the second image coordinate point set corresponding to the person's head is in a second preset coordinate region; if not, step 207 is performed; if so, step 209 is performed.
Similarly, in order to speed up the judgment, when the terminal judges whether the image coordinate point set corresponding to the person's head is in the preset coordinate region, one target image coordinate point can also be selected from the image coordinate point set according to coordinate value size; when it is judged that this target image coordinate point is in the preset coordinate region, all coordinate points in the image coordinate point set are taken by default to be located in the preset coordinate region. For example, the coordinate point with the minimum or maximum Y value (opposite to the coordinate value of the target coordinate point corresponding to the feet) can be selected from the second image coordinate point set as the target image coordinate point. That is, the step "the terminal judges whether the second image coordinate point set corresponding to the person's head is in the second preset coordinate region" may include:
the terminal selects a second target image coordinate point from the second image coordinate point set according to the coordinate values corresponding to the image coordinate points in the second image coordinate point set;
the terminal judges whether the coordinate values of the second target image coordinate point are within the second coordinate value range corresponding to the second preset coordinate region;
if so, it is judged that the second image coordinate point set is located in the second preset coordinate region;
if not, it is judged that the second image coordinate point set is not in the second preset coordinate region.
The second preset coordinate region can be set based on the image coordinate points of the character image; for example, the second preset coordinate region corresponding to the head can be the top coordinate region of the character image, the coordinate region corresponding to the whole character image, and so on.
Optionally, in practical applications, when the user is near the border of the reasonable range, jitter in the skeleton coordinates may cause an erroneous judgment that the target body part (e.g. the feet or head) is not in the character image. Therefore, this embodiment can avoid this situation based on the number of consecutive judgments that the body part is not in the character image, improving the accuracy of the judgment; for example, the target body part is determined not to be in the character image only when the number of consecutive such judgments exceeds a certain value. That is, after it is determined that the second image coordinate point set is not in the second preset region, and before the depth information of the pixels in the character image is adjusted (i.e. between steps 205 and 207), the method of this embodiment may further include:
the terminal obtains the number of consecutive judgments that the second image coordinate point set is not in the second preset coordinate region;
the terminal judges whether this number of consecutive judgments is greater than a preset number;
if so, step 207 is performed.
206. The terminal obtains the offset information between the image coordinate points in the first image coordinate set and the first preset coordinate region, and adjusts the definition of the pixels in the character image according to the offset information, to obtain adjusted depth information; then go to step 208.
The offset information may include an offset direction, an offset distance, etc.; the offset distance can be obtained from the difference between the coordinate values of the coordinate point and the coordinate value range corresponding to the preset coordinate region.
Specifically, the step "the terminal obtains the offset information between the image coordinate points in the first image coordinate set and the first preset coordinate region" may include: the terminal obtains the offset information between the first target image coordinate point and the first preset coordinate region.
For example, the offset distance and offset direction between the first target image coordinate point and the center coordinate point or a boundary coordinate point of the first preset coordinate region (i.e. a coordinate point on the boundary of the first preset coordinate region) are obtained.
Specifically, in this embodiment the definition can be adjusted according to the offset information in a variety of ways; for example, a corresponding target definition can be obtained according to the offset information, and the definition of the pixels in the character image is then adjusted according to the target definition. Different pixels in the character image may be adjusted to the same definition or to different definitions.
Optionally, in order to improve the stability of the definition and the image display effect, low-pass filtering can be applied to the definition; that is, the step "adjusting the definition corresponding to the pixels in the character image according to the offset information" may include:
obtaining a corresponding target definition according to the offset information;
judging whether the target definition is within a preset threshold range;
if so, adjusting the definition corresponding to the pixels in the character image according to the target definition;
if not, filtering out the target definition.
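One way to realize this offset-to-definition mapping with a threshold filter is sketched below; the linear fade (larger offset gives lower definition, so the person fades out as a reminder) and the concrete range and threshold values are illustrative assumptions, since the patent only requires that out-of-threshold values be filtered out:

```python
def definition_from_offset(offset_distance, max_offset, def_range=(0, 255),
                           threshold=(20, 235)):
    """Map an offset distance to a target definition: zero offset keeps
    full definition, max_offset fades to the bottom of the range. Values
    outside the preset threshold range are filtered out (returned as None)."""
    lo, hi = def_range
    ratio = min(offset_distance / max_offset, 1.0)
    target = round(hi - ratio * (hi - lo))
    t_lo, t_hi = threshold
    return target if t_lo <= target <= t_hi else None  # None = filtered out

print(definition_from_offset(60, 240))   # -> 191
print(definition_from_offset(240, 240))  # -> None (0 falls outside the threshold)
```

Discarding out-of-threshold targets keeps momentary extreme offsets from making the rendered person flicker fully visible or fully invisible.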
207. The terminal obtains the offset information between the image coordinate points in the second image coordinate set and the second preset coordinate region, and adjusts the definition of the pixels in the character image according to the offset information, to obtain adjusted depth information; then go to step 208.
The offset information may include an offset direction, an offset distance, etc.; the offset distance can be obtained from the difference between the coordinate values of the coordinate point and the coordinate value range corresponding to the preset coordinate region.
Specifically, the step "the terminal obtains the offset information between the image coordinate points in the second image coordinate set and the second preset coordinate region" may include: the terminal obtains the offset information between the second target image coordinate point and the second preset coordinate region.
For example, the offset distance and offset direction between the second target image coordinate point and the center coordinate point or a boundary coordinate point of the second preset coordinate region (i.e. a coordinate point on the boundary of the second preset coordinate region) are obtained.
Specifically, in this embodiment the definition can be adjusted according to the offset information in a variety of ways; for example, a corresponding target definition can be obtained according to the offset information, and the definition of the pixels in the character image is then adjusted according to the target definition. Different pixels in the character image may be adjusted to the same definition or to different definitions.
Similarly, in order to improve the stability of the definition and the image display effect, low-pass filtering can be applied to the definition. For example, a corresponding target definition is obtained according to the offset information between the second target image coordinate point and the second preset coordinate region, and it is then judged whether the target definition is within a preset threshold range; if not, it is filtered out; if so, the depth information is adjusted according to the target definition.
208. The terminal obtains the position of the person in the character image, and crops the character image according to the position and a preset image resolution, to obtain a target character image of the preset image resolution.
In practice, the image resolution provided by the camera is usually greater than the image resolution that the person can actually occupy, so part of the pixels are invalid and terminal system resources are wasted. For example, the image resolution commonly provided by a camera is 640*480, yet even when the user's whole body is within the camera range with both arms spread, an average adult only occupies about 480*480, so at least 120*480 pixels are invalid. In order to save terminal system resources, this embodiment can crop the character image. Considering that cropping at a fixed position (e.g. taking 240 pixels on each side of the image center point) would effectively shrink the camera's range by force and further reduce the user's range of motion, this embodiment can crop based on the person's position in the image. That is, the step "cropping the character image according to the position and the preset image resolution" may include:
determining a cropping position according to the position;
cropping the character image according to the cropping position and the preset image resolution.
This embodiment can dynamically track the position of the user and dynamically adjust the cropping position according to the user's real-time position in the character image, which both preserves the user's range of motion and avoids processing invalid pixels, saving terminal system resources.
For example, when the preset image resolution is 580*580 and the person is located in the middle of the image, the image center point can be used as the cropping point, and 290 pixels are taken on each side of the cropping point, so that a 580*580 target character image is cropped out of the character image. As another example, when the preset image resolution is 380*380 and the person is located at the left or right side of the image, the cropping point is determined accordingly at the left or right side of the character image, and 190 pixels are taken on each side of the cropping point, so that a 380*380 target character image is cropped out of the character image.
209. The terminal renders the target character image in the target rendering region according to the rendering position indication information and the depth information corresponding to the pixels in the target character image.
The rendering position indication information indicates the target rendering region in the interactive interface; the target rendering region can be set according to actual requirements, for example the middle region of the interactive interface. Referring to Figure 2c, the character image can be rendered in the middle region of a game's virtual scene.
The depth information may include the color information corresponding to the pixels and the definition; in this case, the terminal can render the target character image in the target rendering region according to the rendering position indication information and the color information and definition corresponding to the pixels in the target character image.
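Rendering with per-pixel color information and definition amounts to alpha-compositing the person over the interface background, with the definition acting as the blend weight; the 0-255 scale and the per-pixel function below are illustrative assumptions:

```python
def composite_pixel(person_rgb, background_rgb, definition, max_definition=255):
    """Blend one person pixel over the background pixel, using the pixel's
    definition as an alpha weight: max_definition = fully opaque person,
    0 = background only."""
    a = definition / max_definition
    return tuple(round(a * p + (1 - a) * b)
                 for p, b in zip(person_rgb, background_rgb))

print(composite_pixel((200, 100, 50), (0, 0, 0), 255))  # -> (200, 100, 50)
print(composite_pixel((200, 100, 50), (0, 0, 0), 128))  # -> (100, 50, 25)
```

Applying this per pixel across the target rendering region is what makes the boundary-smoothing and fade-out adjustments of the earlier steps visible in the interface.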
In summary, in the embodiment of the present invention, an information acquisition request is received; the depth image information of a person is then obtained according to the information acquisition request, the depth image information including a character image and the depth information of the pixels in the character image; rendering position indication information is obtained, the rendering position indication information indicating a target rendering region in an interactive interface; and the character image is rendered in the target rendering region according to the rendering position indication information and the depth information corresponding to the pixels. Because this scheme renders the character image in the target region based on the depth information of the image pixels, it can simulate an immersive, on-the-scene feeling; compared with the prior art, it can improve the image display effect and improve the interaction between the user and the terminal.
Embodiment three,
In order to better implement the above method, an embodiment of the present invention also provides an image processing apparatus. As shown in Figure 3a, the image processing apparatus may include a receiving unit 301, a first acquisition unit 302, a second acquisition unit 303 and a rendering unit 304, as follows:
(1) Receiving unit 301;
The receiving unit 301 is configured to receive an information acquisition request.
(2) First acquisition unit 302;
The first acquisition unit 302 is configured to obtain the depth image information of a person according to the information acquisition request, the depth image information including a character image and the depth information of the pixels in the character image.
For example, the first acquisition unit 302 can obtain the depth image information from a local storage unit.
The depth image information may be depth image information collected by a depth camera, where the depth camera may be a motion-sensing camera or the like.
In this embodiment, the depth image information may include the character image, the depth information of the pixels in the character image, the character identification labels corresponding to the pixels, and so on. The depth information may include the color information corresponding to the pixels and the definition corresponding to the pixels.
(3) second acquisition unit 303;
The second acquisition unit 303 is configured to obtain rendering position indication information, the rendering position indication information indicating a target rendering region in an interactive interface.
The rendering position indication information may be generated based on rendering position information preset by a user, or may be generated based on a background image in the interactive interface.
In this embodiment, the target rendering region may be set according to actual requirements; for example, it may be the middle region or the upper region of a game interaction interface.
(4) rendering unit 304;
The rendering unit 304 is configured to render the character image in the target rendering region according to the rendering position indication information and the depth information corresponding to the pixels.
Specifically, the rendering unit 304 may include a first determination subunit, an adjustment subunit and a rendering subunit:
the first determination subunit is configured to determine whether a first target site of the person is in the character image;
the adjustment subunit is configured to, when it is determined that the first target site is not in the character image, adjust the depth information of the pixels in the character image to obtain adjusted depth information;
the rendering subunit is configured to render the character image in the target rendering region according to the rendering position indication information and the adjusted depth information corresponding to the pixels.
For example, the first determination subunit may be specifically configured to:
obtain skeleton coordinates of the person, and convert the skeleton coordinates into image coordinates;
obtain an image coordinate point set corresponding to the first target site;
judge whether the image coordinate points in the image coordinate point set are in a preset coordinate region;
if so, determine that the first target site of the person is in the character image;
if not, determine that the first target site of the person is not in the character image.
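The steps above can be sketched in Python as follows. The pinhole-style projection and the camera intrinsics (`fx`, `fy`, `cx`, `cy`) are illustrative assumptions; the embodiment only specifies that skeleton coordinates are converted to image coordinates and tested against a preset coordinate region.

```python
def bone_to_image(bone_xyz, fx=525.0, fy=525.0, cx=320.0, cy=240.0):
    # Pinhole-style projection of a 3-D skeleton coordinate (camera frame)
    # onto the depth-image plane. The intrinsics are illustrative defaults.
    x, y, z = bone_xyz
    return (fx * x / z + cx, fy * y / z + cy)

def first_target_in_image(bone_points, region=(0, 0, 640, 480)):
    # The first target site (e.g. the head) counts as "in the image" when
    # every projected coordinate of its point set falls inside the preset
    # coordinate region (left, top, right, bottom).
    x0, y0, x1, y1 = region
    pts = [bone_to_image(p) for p in bone_points]
    return all(x0 <= u <= x1 and y0 <= v <= y1 for u, v in pts)
```

For example, a point straight ahead of the camera projects to the image centre and passes the check, while a point far to the side projects outside the preset region and fails it.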
More specifically, the first determination subunit may be configured to:
select a target image coordinate point from the image coordinate point set according to the coordinate values of the image coordinate points in the set;
judge whether the coordinate value of the target image coordinate point is within the coordinate value range corresponding to the preset coordinate region;
if so, judge that the image coordinate points in the image coordinate point set are in the preset coordinate region;
if not, judge that the image coordinate points in the image coordinate point set are not in the preset coordinate region.
In this embodiment, the adjustment subunit may be specifically configured to:
obtain offset information between the image coordinate points in the image coordinate point set and the preset coordinate region;
adjust the depth information of the pixels in the character image according to the offset information.
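One plausible reading of this offset-based adjustment is sketched below: the further the target site has drifted past the preset region, the more the per-pixel definition is scaled down. The rectangular-distance offset and the linear falloff constant are assumptions introduced purely for illustration; the embodiment does not fix a particular mapping from offset to depth information.

```python
def region_offset(point, region):
    # Offset of an image coordinate from the preset region: zero when the
    # point lies inside, otherwise the distance past the nearest edge.
    (u, v), (x0, y0, x1, y1) = point, region
    du = max(x0 - u, 0.0, u - x1)
    dv = max(y0 - v, 0.0, v - y1)
    return max(du, dv)

def adjust_definitions(definitions, offset, falloff=100.0):
    # Scale every pixel's definition down as the offset grows, so the
    # person gradually fades the further the site leaves the region.
    scale = max(0.0, 1.0 - offset / falloff)
    return [d * scale for d in definitions]
```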
In this embodiment, the depth information includes a definition corresponding to each pixel, the definition being the definition of the pixel relative to the background image in the interactive interface.
In this case, the adjustment subunit may be specifically configured to adjust the definition corresponding to the pixels in the character image according to the offset information.
For example, to improve the stability of the image definition and the display effect, the adjustment subunit may be configured to:
obtain a corresponding target definition according to the offset information;
judge whether the target definition is within a preset threshold range;
if so, adjust the definition corresponding to the pixels in the character image according to the target definition.
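A minimal sketch of the threshold gate described above: the computed target definition is applied only while it stays inside the preset threshold range, so out-of-range values never disturb the display. The bounds `lo` and `hi` stand in for the preset threshold range and are illustrative.

```python
def apply_target_definition(definitions, target, lo=0.1, hi=1.0):
    # Apply the target definition to every pixel only when it lies within
    # the preset threshold range [lo, hi]; otherwise leave the current
    # definitions unchanged, keeping the displayed image stable.
    if lo <= target <= hi:
        return [target for _ in definitions]
    return definitions
```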
Optionally, to improve the stability of the image definition and the display effect, this embodiment may low-pass filter the definition; that is, the rendering unit 304 in this embodiment may further include a count judgment subunit:
the count judgment subunit is configured to, after the first determination subunit determines that the first target site is not in the character image and before the adjustment subunit adjusts the depth information of the pixels in the character image, obtain the number of times that the first target site has currently been consecutively determined not to be in the character image, and judge whether this consecutive count is greater than a preset count;
the adjustment subunit is specifically configured to adjust the depth information of the pixels in the character image when the judgment of the count judgment subunit is positive.
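The consecutive-count gate, which acts here as a simple low-pass filter on the in/out decision, can be sketched as a small state machine. The class name and the preset count used in the example are hypothetical.

```python
class MissCounter:
    # Debounce for the "target site left the image" decision: the depth
    # information is only adjusted after the site has been missing for more
    # than `preset_times` consecutive frames, filtering one-frame jitter.
    def __init__(self, preset_times=3):
        self.preset_times = preset_times
        self.misses = 0

    def update(self, in_image):
        # Returns True when the adjustment should actually be performed.
        self.misses = 0 if in_image else self.misses + 1
        return self.misses > self.preset_times
```

A single dropped frame therefore never triggers the adjustment; only a sustained absence does.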
Optionally, the rendering unit 304 in this embodiment may further include a second determination subunit:
the second determination subunit is configured to, when the first determination subunit determines that the first target site of the person is in the character image, determine whether a second target site of the person is in the character image;
in this case, the adjustment subunit is further configured to, when the second determination subunit determines that the second target site is not in the character image, adjust the depth information of the pixels in the character image to obtain adjusted depth information.
Optionally, to fill the holes in the character image and improve the image display effect, this embodiment may preprocess the character image. Specifically, the depth image information further includes character identification labels corresponding to the pixels, and the depth information includes the definition corresponding to the pixels. Referring to Fig. 3b, the image processing apparatus may further include:
an area determination unit 305, configured to determine a character image region in the character image after the first acquisition unit obtains the depth image information and before the rendering unit renders the character image;
a pixel value determination unit 306, configured to determine, according to the character identification labels corresponding to the pixels in the character image region, interruption pixels at which the labels are interrupted in the character image region;
an identification processing unit 307, configured to regenerate the character identification labels of the interruption pixels;
a first definition adjustment unit 308, configured to adjust the definition corresponding to the pixels in the character image region according to a preset definition range corresponding to the character identification labels.
For example, the pixel value determination unit 306 may be specifically configured to:
scan the character identification labels corresponding to each row or each column of pixels in the character image region in turn to obtain a label scanning result;
determine, according to the label scanning result, the interruption pixels in each row or column at which the labels are interrupted and which are not on the border of the character image region.
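A row-wise scan of this kind might look as follows. The convention that label 0 means "unlabeled/background" is an assumption, and the function name is invented for illustration; the key point mirrored from the text is that only holes *inside* the labeled run count as interruptions, not pixels on the region border.

```python
def find_interruptions(row_labels):
    # Scan one row of character identification labels; a pixel is an
    # "interruption" when it is unlabeled (0) but has labeled person pixels
    # on both sides, i.e. a hole inside the character region rather than a
    # pixel on the region border.
    holes = []
    for i, lab in enumerate(row_labels):
        if lab == 0 and any(l > 0 for l in row_labels[:i]) \
                    and any(l > 0 for l in row_labels[i + 1:]):
            holes.append(i)
    return holes
```

The identification processing unit would then regenerate the labels at the returned indices.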
Optionally, the apparatus of this embodiment may further include a boundary pixel processing unit, which is specifically configured to:
after the first definition adjustment unit 308 adjusts the definition and before the rendering unit 304 renders the character image, determine the boundary pixels located on the region border in the character image region;
obtain a corresponding definition range according to the definition corresponding to the pixels in the background image region of the character image and the current definition corresponding to the boundary pixels;
adjust the definition corresponding to the boundary pixels according to the definition range.
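One way to read the boundary-pixel adjustment is sketched below: a definition range is derived from the neighbouring background definition and the boundary pixel's current definition, and the boundary pixel is then clamped into that range to soften the person/background edge. The `margin` parameter and the clamping rule are assumptions, not the embodiment's specification.

```python
def boundary_definition_range(background_def, boundary_def, margin=0.125):
    # Allowed definition range for a boundary pixel, spanning the background
    # definition and the pixel's current definition plus a small margin,
    # clipped to the assumed [0, 1] definition scale.
    lo = min(background_def, boundary_def) - margin
    hi = max(background_def, boundary_def) + margin
    return max(lo, 0.0), min(hi, 1.0)

def clamp_boundary(definition, rng):
    # Clamp the boundary pixel's definition into the derived range.
    lo, hi = rng
    return min(max(definition, lo), hi)
```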
Optionally, to save system resources, this embodiment may also perform resolution processing on the character image; that is, the rendering subunit in this embodiment may be specifically configured to:
obtain the position of the person in the character image;
crop the character image according to the position and a preset image resolution to obtain a target character image of the preset image resolution;
render the target character image in the target rendering region according to the rendering position indication information and the adjusted depth information corresponding to the pixels in the target character image.
In specific implementation, each of the above units may be implemented as an independent entity, or the units may be combined arbitrarily and implemented as one or several entities; for the specific implementation of each of the above units, reference may be made to the foregoing method embodiments, and details are not repeated here.
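The cropping step can be sketched as computing a crop rectangle of the preset resolution centred on the person's position. Centring on the person and shifting the window back inside the image bounds are illustrative choices; the embodiment only requires a crop at the preset image resolution.

```python
def crop_to_resolution(image_w, image_h, person_x, person_y, out_w=320, out_h=240):
    # Centre a crop window of the preset resolution on the person's
    # position, shifting it back inside the image where necessary;
    # returns the (left, top, right, bottom) crop rectangle.
    left = min(max(person_x - out_w // 2, 0), image_w - out_w)
    top = min(max(person_y - out_h // 2, 0), image_h - out_h)
    return (left, top, left + out_w, top + out_h)
```

Only the cropped target character image then needs to be rendered, which is where the resource saving comes from.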
The image processing apparatus may be implemented as an independent entity, or may be integrated in a terminal; the terminal may specifically include a device such as a mobile phone, a tablet computer, a smart television, a game console, a notebook computer or a personal computer.
From the foregoing, it can be seen that in the embodiment of the present invention the receiving unit 301 receives an information acquisition request; the first acquisition unit 302 then obtains depth image information of a person according to the information acquisition request, the depth image information including a character image and depth information of pixels in the character image; the second acquisition unit 303 obtains rendering position indication information, the rendering position indication information indicating a target rendering region in an interactive interface; and the rendering unit 304 renders the character image in the target rendering region according to the rendering position indication information and the depth information corresponding to the pixels. Because this scheme renders the character image in the target region based on the depth information of the image pixels, it can simulate an immersive, on-the-spot sensation; compared with the prior art, it can improve the image display effect and the interactivity between the user and the terminal.
Embodiment Four
Accordingly, an embodiment of the present invention further provides a terminal. As shown in Fig. 4, the terminal may include a radio frequency (RF) circuit 401, a memory 402 including one or more computer-readable storage media, an input unit 403, a display unit 404, a sensor 405, an audio circuit 406, a Wireless Fidelity (WiFi) module 407, a processor 408 including one or more processing cores, a power supply 409, and other components. Those skilled in the art will appreciate that the terminal structure shown in Fig. 4 does not constitute a limitation on the terminal, which may include more or fewer components than illustrated, combine certain components, or arrange the components differently. In detail:
The RF circuit 401 may be used to receive and send signals during information transmission and reception or during a call; in particular, after receiving downlink information from a base station, it passes the information to one or more processors 408 for processing, and it also sends uplink data to the base station. Generally, the RF circuit 401 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer and the like. In addition, the RF circuit 401 may also communicate with networks and other devices by wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS) and the like.
The memory 402 may be used to store software programs and modules; the processor 408 performs various functional applications and data processing by running the software programs and modules stored in the memory 402. The memory 402 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function) and the like, and the data storage area may store data created according to the use of the terminal (such as audio data or a phone book) and the like. In addition, the memory 402 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device or another volatile solid-state storage device. Correspondingly, the memory 402 may further include a memory controller to provide the processor 408 and the input unit 403 with access to the memory 402.
The input unit 403 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. Specifically, in one embodiment, the input unit 403 may include a touch-sensitive surface and other input devices. The touch-sensitive surface, also called a touch display screen or a touch panel, can collect touch operations by the user on or near it (such as operations by the user with a finger, a stylus or any other suitable object or accessory on or near the touch-sensitive surface) and drive the corresponding connection apparatus according to a preset program. Optionally, the touch-sensitive surface may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 408, and can receive and execute commands sent by the processor 408. In addition, the touch-sensitive surface may be implemented in various types such as resistive, capacitive, infrared and surface acoustic wave. Besides the touch-sensitive surface, the input unit 403 may also include other input devices. Specifically, the other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as a volume control key and a switch key), a trackball, a mouse, a joystick and the like.
The display unit 404 may be used to display information input by the user or provided to the user, as well as the various graphical user interfaces of the terminal; these graphical user interfaces may be composed of graphics, text, icons, video and any combination thereof. The display unit 404 may include a display panel, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) or the like. Further, the touch-sensitive surface may cover the display panel; after detecting a touch operation on or near it, the touch-sensitive surface transmits the operation to the processor 408 to determine the type of the touch event, and the processor 408 then provides a corresponding visual output on the display panel according to the type of the touch event. Although in Fig. 4 the touch-sensitive surface and the display panel implement the input and output functions as two independent components, in some embodiments the touch-sensitive surface and the display panel may be integrated to implement the input and output functions.
The terminal may also include at least one sensor 405, such as a light sensor, a motion sensor and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor may adjust the brightness of the display panel according to the brightness of the ambient light, and the proximity sensor may turn off the display panel and/or the backlight when the terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in all directions (generally three axes) and can detect the magnitude and direction of gravity when static; it can be used for applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games and magnetometer posture calibration), vibration-recognition related functions (such as a pedometer and tapping) and the like. The terminal may be further configured with a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor and other sensors, which are not described here.
The audio circuit 406, a loudspeaker and a microphone may provide an audio interface between the user and the terminal. The audio circuit 406 may convert received audio data into an electrical signal and transmit it to the loudspeaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 406 and converted into audio data; after the audio data is output to the processor 408 for processing, it is sent through the RF circuit 401 to, for example, another terminal, or output to the memory 402 for further processing. The audio circuit 406 may also include an earphone jack to provide communication between a peripheral earphone and the terminal.
WiFi is a short-range wireless transmission technology; through the WiFi module 407, the terminal can help the user to send and receive e-mails, browse web pages, access streaming media and so on, providing the user with wireless broadband Internet access. Although Fig. 4 shows the WiFi module 407, it will be understood that it is not an essential component of the terminal and may be omitted as needed without changing the essence of the invention.
The processor 408 is the control center of the terminal; it connects the various parts of the whole mobile phone using various interfaces and lines, and performs the various functions of the terminal and processes data by running or executing the software programs and/or modules stored in the memory 402 and calling the data stored in the memory 402, thereby monitoring the mobile phone as a whole. Optionally, the processor 408 may include one or more processing cores; preferably, the processor 408 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs and the like, and the modem processor mainly handles wireless communication. It will be understood that the above modem processor may also not be integrated into the processor 408.
The terminal also includes the power supply 409 (such as a battery) that supplies power to the various components; preferably, the power supply may be logically connected to the processor 408 through a power management system, so that functions such as managing charging, discharging and power consumption are implemented through the power management system. The power supply 409 may also include any components such as one or more DC or AC power sources, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
Although not shown, the terminal may also include a camera, a Bluetooth module and the like, which are not described here. Specifically, in this embodiment, the processor 408 in the terminal loads, according to the following instructions, the executable files corresponding to the processes of one or more application programs into the memory 402, and the processor 408 runs the application programs stored in the memory 402 to implement various functions:
receiving an information acquisition request;
obtaining depth image information of a person according to the information acquisition request, the depth image information including a character image and depth information of pixels in the character image;
obtaining rendering position indication information, the rendering position indication information indicating a target rendering region in an interactive interface;
rendering the character image in the target rendering region according to the rendering position indication information and the depth information corresponding to the pixels.
Optionally, the step of "rendering the character image in the target rendering region according to the rendering position indication information and the depth information corresponding to the pixels" may include:
determining whether a first target site of the person is in the character image;
if not, adjusting the depth information of the pixels in the character image to obtain adjusted depth information;
rendering the character image in the target rendering region according to the rendering position indication information and the adjusted depth information corresponding to the pixels.
The step of "determining whether the first target site of the person is in the character image" may include:
obtaining skeleton coordinates of the person, and converting the skeleton coordinates into image coordinates;
obtaining an image coordinate point set corresponding to the first target site;
judging whether the image coordinate points in the image coordinate point set are in a preset coordinate region;
if so, determining that the first target site of the person is in the character image;
if not, determining that the first target site of the person is not in the character image.
In addition, the processor 408 may also implement the following functions:
after the depth image information is obtained and before the character image is rendered, determining a character image region in the character image;
determining, according to the character identification labels corresponding to the pixels in the character image region, interruption pixels at which the labels are interrupted in the character image region;
regenerating the character identification labels of the interruption pixels;
adjusting the definition corresponding to the pixels in the character image region according to a preset definition range corresponding to the character identification labels;
after the definition corresponding to the pixels in the character image region is adjusted according to the preset definition range, and before the character image is rendered, determining the boundary pixels located on the region border in the character image region;
obtaining a corresponding definition range according to the definition corresponding to the pixels in the background image region of the character image and the current definition corresponding to the boundary pixels;
adjusting the definition corresponding to the boundary pixels according to the definition range.
For the specific implementation of each of the above operations, reference may be made to the foregoing embodiments, and details are not repeated here.
From the foregoing, it can be seen that in the embodiment of the present invention an information acquisition request is received; depth image information of a person is then obtained according to the information acquisition request, the depth image information including a character image and depth information of pixels in the character image; rendering position indication information is obtained, the rendering position indication information indicating a target rendering region in an interactive interface; and the character image is rendered in the target rendering region according to the rendering position indication information and the depth information corresponding to the pixels. Because this scheme renders the character image in the target region based on the depth information of the image pixels, it can simulate an immersive, on-the-spot sensation; compared with the prior art, it can improve the image display effect and the interactivity between the user and the terminal.
One of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc or the like.
An image processing method and apparatus provided by the embodiments of the present invention have been described in detail above. Specific examples are applied herein to set forth the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those skilled in the art, there will be changes in the specific implementations and the scope of application according to the idea of the present invention. In summary, the content of this specification should not be construed as a limitation on the present invention.

Claims (21)

  1. An image processing method, characterized by comprising:
    receiving an information acquisition request;
    obtaining depth image information of a person according to the information acquisition request, the depth image information including a character image and depth information of pixels in the character image;
    obtaining rendering position indication information, the rendering position indication information indicating a target rendering region in an interactive interface;
    determining whether a first target site of the person is in the character image;
    if not, adjusting the depth information of the pixels in the character image to obtain adjusted depth information;
    rendering the character image in the target rendering region according to the rendering position indication information and the adjusted depth information corresponding to the pixels.
  2. The image processing method according to claim 1, characterized in that the step of determining whether the first target site of the person is in the character image specifically comprises:
    obtaining skeleton coordinates of the person, and converting the skeleton coordinates into image coordinates;
    obtaining an image coordinate point set corresponding to the first target site;
    judging whether the image coordinate points in the image coordinate point set are in a preset coordinate region;
    if so, determining that the first target site of the person is in the character image;
    if not, determining that the first target site of the person is not in the character image.
  3. The image processing method according to claim 2, characterized in that the step of judging whether the image coordinate points in the image coordinate point set are in the preset coordinate region specifically comprises:
    selecting a target image coordinate point from the image coordinate point set according to the coordinate values of the image coordinate points in the image coordinate point set;
    judging whether the coordinate value of the target image coordinate point is within the coordinate value range corresponding to the preset coordinate region;
    if so, judging that the image coordinate points in the image coordinate point set are in the preset coordinate region;
    if not, judging that the image coordinate points in the image coordinate point set are not in the preset coordinate region.
  4. The image processing method according to claim 2, characterized in that the step of adjusting the depth information of the pixels in the character image specifically comprises:
    obtaining offset information between the image coordinate points in the image coordinate point set and the preset coordinate region;
    adjusting the depth information of the pixels in the character image according to the offset information.
  5. The image processing method according to claim 4, characterized in that the depth information includes a definition corresponding to each pixel, the definition being the definition of the pixel relative to a background image in the interactive interface;
    the step of adjusting the depth information of the pixels in the character image according to the offset information specifically comprises: adjusting the definition corresponding to the pixels in the character image according to the offset information.
  6. The image processing method according to claim 5, characterized in that the step of adjusting the definition corresponding to the pixels in the character image according to the offset information specifically comprises:
    obtaining a corresponding target definition according to the offset information;
    judging whether the target definition is within a preset threshold range;
    if so, adjusting the definition corresponding to the pixels in the character image according to the target definition.
  7. The image processing method according to any one of claims 1-6, characterized in that, after it is determined that the first target site is not in the character image and before the depth information of the pixels in the character image is adjusted, the image processing method further comprises:
    obtaining the number of times that the first target site has currently been consecutively determined not to be in the character image;
    judging whether the consecutive count is greater than a preset count;
    if so, performing the step of adjusting the depth information of the pixels in the character image.
  8. The image processing method according to claim 1, characterized in that the step of rendering the character image in the target rendering region according to the rendering position indication information and the depth information corresponding to the pixels further comprises:
    when it is determined that the first target site of the person is in the character image, determining whether a second target site of the person is in the character image;
    if not, adjusting the depth information of the pixels in the character image to obtain adjusted depth information;
    rendering the character image in the target rendering region according to the rendering position indication information and the adjusted depth information corresponding to the pixels.
  9. The image processing method according to claim 1, characterized in that the depth image information further includes character identification labels corresponding to the pixels, and the depth information includes a definition corresponding to each pixel;
    after the depth image information is obtained and before the character image is rendered, the image processing method further comprises:
    determining a character image region in the character image;
    determining, according to the character identification labels corresponding to the pixels in the character image region, interruption pixels at which the labels are interrupted in the character image region;
    regenerating the character identification labels of the interruption pixels;
    adjusting the definition corresponding to the pixels in the character image region according to a preset definition range corresponding to the character identification labels.
  10. The image processing method according to claim 9, characterized in that the step of determining, according to the character identification labels corresponding to the pixels in the character image region, the interruption pixels at which the labels are interrupted in the character image region specifically comprises:
    scanning the character identification labels corresponding to each row or each column of pixels in the character image region in turn to obtain a label scanning result;
    determining, according to the label scanning result, the interruption pixels in each row or column at which the labels are interrupted and which are not on the border of the character image region.
  11. The image processing method according to claim 9, wherein after the sharpness corresponding to the pixels in the character image region is adjusted according to the preset sharpness range, and before the character image is rendered, the image processing method further comprises:
    Determining boundary pixels located on the region boundary of the character image region;
    Obtaining a corresponding sharpness range according to the sharpness corresponding to the pixels in the background image region of the character image and the current sharpness corresponding to the boundary pixels;
    Adjusting the sharpness corresponding to the boundary pixels according to the sharpness range.
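One plausible reading of this claim, assumed here, is that the range is spanned by the background sharpness and the boundary pixel's current sharpness, and the boundary pixel is moved inside that range so the silhouette edge blends into the background. A minimal sketch; the midpoint choice and the function name are illustrative assumptions:

```python
def smooth_boundary(boundary_sharpness, background_sharpness):
    """Derive a sharpness range from the background sharpness and the
    boundary pixel's current sharpness, then move the boundary pixel to
    the middle of that range so the silhouette edge blends smoothly."""
    lo = min(boundary_sharpness, background_sharpness)
    hi = max(boundary_sharpness, background_sharpness)
    return (lo + hi) / 2.0

# A sharp boundary pixel (0.75) over a blurred background (0.25)
print(smooth_boundary(0.75, 0.25))  # -> 0.5
```

Any value inside the derived range would satisfy the claim wording; the midpoint is just the simplest choice.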
  12. The image processing method according to claim 1, wherein the step of rendering the character image in the target rendering region according to the rendering position indication information and the adjusted depth information corresponding to the pixels specifically comprises:
    Obtaining the position of the character in the character image;
    Cropping the character image according to the position and a preset image resolution, to obtain a target character image of the preset image resolution;
    Rendering the target character image in the target rendering region according to the rendering position indication information and the adjusted depth information corresponding to the pixels in the target character image.
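The cropping step can be sketched as computing a window of the preset resolution centred on the character's position and shifted so it stays inside the source image. A minimal sketch in Python; the rectangle convention `(left, top, right, bottom)` and all names are illustrative assumptions:

```python
def crop_character(image_w, image_h, pos, target_w, target_h):
    """Compute a crop window of the preset resolution centred on the
    character position, shifted as needed to stay inside the source
    image. Returns (left, top, right, bottom)."""
    cx, cy = pos
    left = min(max(cx - target_w // 2, 0), image_w - target_w)
    top = min(max(cy - target_h // 2, 0), image_h - target_h)
    return left, top, left + target_w, top + target_h

# 640x480 source, character near the top-right corner, 320x240 target:
# the window is clamped against the right and top edges.
print(crop_character(640, 480, (600, 100), 320, 240))  # -> (320, 0, 640, 240)
```

The assumption here is that the preset resolution is no larger than the source image; a real implementation would also validate that.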
  13. An image processing apparatus, comprising:
    A receiving unit, configured to receive an information obtaining request;
    A first obtaining unit, configured to obtain depth image information of a character according to the information obtaining request, the depth image information comprising: a character image and depth information of pixels in the character image;
    A second obtaining unit, configured to obtain rendering position indication information, the rendering position indication information indicating a target rendering region in an interactive interface;
    A rendering unit, comprising: a first determining subunit, an adjusting subunit and a rendering subunit;
    The first determining subunit, configured to determine whether a first target part of the character is in the character image;
    The adjusting subunit, configured to, when it is determined that the first target part is not in the character image, adjust the depth information of the pixels in the character image to obtain adjusted depth information;
    The rendering subunit, configured to render the character image in the target rendering region according to the rendering position indication information and the adjusted depth information corresponding to the pixels.
  14. The image processing apparatus according to claim 13, wherein the first determining subunit is specifically configured to:
    Obtain bone coordinates of the character, and convert the bone coordinates into image coordinates;
    Obtain an image coordinate point set corresponding to the first target part;
    Determine whether the image coordinate points in the image coordinate point set are in a preset coordinate region;
    If so, determine that the first target part of the character is in the character image;
    If not, determine that the first target part of the character is not in the character image.
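The bone-to-image conversion and the region test can be sketched together. The patent does not fix the conversion, so a standard pinhole projection is assumed here; the intrinsics `fx, fy, cx, cy` and all function names are illustrative:

```python
def to_image_coords(bone_points, fx, fy, cx, cy):
    """Project camera-space bone coordinates (x, y, z) to integer image
    coordinates with a pinhole model. The intrinsics are illustrative;
    a depth sensor SDK would normally supply this mapping."""
    return [(int(fx * x / z + cx), int(fy * y / z + cy))
            for (x, y, z) in bone_points]

def part_in_image(points, region):
    """True if every projected point of the target part lies inside the
    preset coordinate region (left, top, right, bottom)."""
    left, top, right, bottom = region
    return all(left <= u < right and top <= v < bottom for (u, v) in points)

# Two head joints roughly 2 m in front of the camera, 640x480 frame.
head = to_image_coords([(0.1, -0.2, 2.0), (0.0, -0.25, 2.0)],
                       fx=500, fy=500, cx=320, cy=240)
print(head, part_in_image(head, (0, 0, 640, 480)))
```

Requiring *every* point of the part to lie in the region is one reading of the claim; a threshold on the fraction of in-region points would be an equally valid variant.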
  15. The image processing apparatus according to claim 14, wherein the adjusting subunit is specifically configured to:
    Obtain offset information between the image coordinate points in the image coordinate point set and the preset coordinate region;
    Adjust the depth information of the pixels in the character image according to the offset information.
  16. The image processing apparatus according to claim 15, wherein the depth information comprises: a sharpness corresponding to each pixel, the sharpness being the sharpness of the pixel relative to a background image in the interactive interface;
    The adjusting subunit is specifically configured to adjust the sharpness corresponding to the pixels in the character image according to the offset information.
  17. The image processing apparatus according to claim 16, wherein the adjusting subunit is specifically configured to:
    Obtain a corresponding target sharpness according to the offset information;
    Determine whether the target sharpness is within a preset threshold range;
    If so, adjust the sharpness corresponding to the pixels in the character image according to the target sharpness.
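The offset-to-sharpness mapping with a threshold gate can be sketched as follows. The patent does not specify the mapping, so a linear "larger offset means blurrier" rule is assumed; the threshold values and all names are illustrative:

```python
def adjust_by_offset(sharpness_map, offset, max_offset, threshold=(0.1, 1.0)):
    """Map the offset between the target part and the preset region to a
    target sharpness (larger offset -> lower sharpness), and apply it to
    every pixel only when it falls inside the preset threshold range."""
    target = max(0.0, 1.0 - offset / max_offset)  # assumed linear mapping
    lo, hi = threshold
    if not (lo <= target <= hi):
        return sharpness_map  # out of range: leave sharpness unchanged
    return {p: target for p in sharpness_map}

pixels = {(0, 0): 1.0, (1, 0): 1.0}
print(adjust_by_offset(pixels, offset=50, max_offset=100))   # target 0.5 applied
print(adjust_by_offset(pixels, offset=100, max_offset=100))  # target 0.0 rejected
```

The gate keeps a wildly out-of-range offset from blurring the character to nothing, which matches the purpose of the preset threshold range in the claim.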
  18. The image processing apparatus according to claim 13, wherein the rendering unit further comprises: a count determining subunit;
    The count determining subunit, configured to, after the first determining subunit determines that the first target part is not in the character image and before the adjusting subunit adjusts the depth information of the pixels in the character image, obtain the number of times the first target part has currently been consecutively determined not to be in the character image, and determine whether the consecutive count is greater than a preset count;
    The adjusting subunit, specifically configured to adjust the depth information of the pixels in the character image when the count determining subunit determines that the consecutive count is greater than the preset count.
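The consecutive-count check of this claim is a debounce: a single noisy frame in which the part seems to leave the image should not trigger the adjustment. A minimal sketch; the class name and the preset count are illustrative:

```python
class OutOfFrameCounter:
    """Trigger the depth adjustment only after the first target part has
    been determined to be out of the character image more than a preset
    number of consecutive times (a debounce against single-frame
    detection noise)."""

    def __init__(self, preset_count):
        self.preset_count = preset_count
        self.consecutive = 0

    def update(self, part_in_image):
        if part_in_image:
            self.consecutive = 0  # any in-image frame resets the run
            return False
        self.consecutive += 1
        return self.consecutive > self.preset_count

counter = OutOfFrameCounter(preset_count=2)
results = [counter.update(p) for p in [False, False, False, True, False]]
print(results)  # the adjustment fires only on the third consecutive miss
```

With `preset_count=2`, the third consecutive out-of-image frame is the first to exceed the preset count, so only it returns `True`.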
  19. The image processing apparatus according to claim 13, wherein the rendering unit further comprises: a second determining subunit;
    The second determining subunit, configured to, when the first determining subunit determines that the first target part of the character is in the character image, determine whether a second target part of the character is in the character image;
    The adjusting subunit, further configured to, when the second determining subunit determines that the second target part is not in the character image, adjust the depth information of the pixels in the character image to obtain adjusted depth information.
  20. The image processing apparatus according to claim 13, wherein the depth image information further comprises: a character identification label corresponding to each pixel, and the depth information comprises: a sharpness corresponding to each pixel; the image processing apparatus further comprises:
    A region determining unit, configured to determine a character image region in the character image after the first obtaining unit obtains the depth image information and before the rendering unit renders the character image;
    A pixel determining unit, configured to determine, according to the character identification labels corresponding to the pixels in the character image region, interrupted pixels at which the labels are interrupted;
    A label processing unit, configured to regenerate the character identification labels of the interrupted pixels;
    A first sharpness adjusting unit, configured to adjust the sharpness corresponding to the pixels in the character image region according to a preset sharpness range corresponding to the character identification label.
  21. The image processing apparatus according to claim 13, wherein the rendering subunit is specifically configured to:
    Obtain the position of the character in the character image;
    Crop the character image according to the position and a preset image resolution, to obtain a target character image of the preset image resolution;
    Render the target character image in the target rendering region according to the rendering position indication information and the adjusted depth information corresponding to the pixels in the target character image.
CN201610463857.6A 2016-06-23 2016-06-23 A kind of image processing method and device Active CN106097429B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610463857.6A CN106097429B (en) 2016-06-23 2016-06-23 A kind of image processing method and device


Publications (2)

Publication Number Publication Date
CN106097429A CN106097429A (en) 2016-11-09
CN106097429B true CN106097429B (en) 2017-11-28

Family

ID=57252278

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610463857.6A Active CN106097429B (en) 2016-06-23 2016-06-23 A kind of image processing method and device

Country Status (1)

Country Link
CN (1) CN106097429B (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107123077B (en) * 2017-03-30 2019-01-08 腾讯科技(深圳)有限公司 The rendering method and device of object
CN111275793B (en) * 2018-12-05 2023-09-29 北京金山办公软件股份有限公司 Text rendering method and device, electronic equipment and storage medium
CN113206971B (en) * 2021-04-13 2023-10-24 聚好看科技股份有限公司 Image processing method and display device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103379256A (en) * 2012-04-25 2013-10-30 华为终端有限公司 Method and device for processing image
CN104599231A (en) * 2015-01-16 2015-05-06 汕头大学 Dynamic portrait synchronizing method based on Kinect and network camera

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4853320B2 (en) * 2007-02-15 2012-01-11 ソニー株式会社 Image processing apparatus and image processing method


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110992271A (en) * 2020-03-04 2020-04-10 腾讯科技(深圳)有限公司 Image processing method, path planning method, device, equipment and storage medium
CN110992271B (en) * 2020-03-04 2020-07-07 腾讯科技(深圳)有限公司 Image processing method, path planning method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN106097429A (en) 2016-11-09

Similar Documents

Publication Publication Date Title
RU2632153C2 (en) Method, device and terminal for displaying virtual keyboard
US10269163B2 (en) Method and apparatus for switching real-time image in instant messaging
CN106097429B (en) A kind of image processing method and device
CN106961711B (en) Method and device for controlling mobile terminal to register network and mobile terminal
CN104519485B (en) Communication means, device and system between a kind of terminal
CN104350776B (en) A kind of method, wireless communication devices and the terminal of adjustment signal measurement period
CN104751410B (en) Image and two-dimensional code fusion method and device
CN103714161B (en) The generation method of image thumbnails, device and terminal
CN113543179A (en) Non-connection state measuring method, terminal and base station
CN111343699A (en) Icon display method and device, storage medium and electronic equipment
CN107748941A (en) A kind of order allocation method and device based on electric automobile
CN104463105B (en) Guideboard recognition methods and device
CN106961676B (en) Network searching method, device and medium
CN109814968A (en) A kind of data inputting method, terminal device and computer readable storage medium
CN106570847B (en) The method and apparatus of image procossing
CN106959761A (en) A kind of terminal photographic method, device and terminal
CN105005432A (en) Method and device for controlling terminal to operate
CN110022553A (en) A kind of subscriber identification card management method and mobile terminal
CN107959952A (en) A kind of detection method and terminal of different system cell
CN104820546B (en) Function information methods of exhibiting and device
CN105957544A (en) Lyric display method and device
KR20220038414A (en) Communication processing method, apparatus, apparatus and medium
CN106791084A (en) The synchronous method and mobile terminal of personalizing parameters
CN107734152A (en) A kind of mobile terminal and brightness control method with illumination functions
CN104951202B (en) A kind of method and device showing chat content

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant