CN104766354A - Method for augmented reality drawing and mobile terminal - Google Patents

Method for augmented reality drawing and mobile terminal

Info

Publication number
CN104766354A
Authority
CN
China
Prior art keywords
user
mobile terminal
touch location
target object
touch
Legal status
Granted
Application number
CN201510121235.0A
Other languages
Chinese (zh)
Other versions
CN104766354B (en)
Inventor
杨旭腾
Current Assignee
SHENZHEN CHUANGMENG TIANDI TECHNOLOGY CO LTD
Original Assignee
SHENZHEN CHUANGMENG TIANDI TECHNOLOGY CO LTD
Application filed by SHENZHEN CHUANGMENG TIANDI TECHNOLOGY CO LTD
Priority to CN201510121235.0A
Publication of CN104766354A
Application granted
Publication of CN104766354B
Current legal status: Active


Landscapes

  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the invention disclose a method for augmented reality drawing. The method comprises the following steps: if the similarity between a target image in a viewfinder picture captured by a camera of a mobile terminal and a preset picture is greater than or equal to a preset threshold, a 3D image corresponding to the preset picture is generated on a display interface of the mobile terminal; a first color instruction selected by a user is obtained; the touch location of the user is detected; if the touch location of the user is detected to be within a surface area of a target object in the 3D image corresponding to the preset picture, a first color block corresponding to the first color instruction is generated, according to a first predetermined manner, within a preset size range of the touch location of the user in the surface area of the target object; and the 3D image containing the first color block is displayed on the display interface of the mobile terminal. Embodiments of the invention further disclose the mobile terminal. With these embodiments, the success rate of operating the augmented reality drawing interface can be increased and the user experience improved.

Description

Method for augmented reality drawing and mobile terminal
Technical field
The embodiments of the present invention relate to the field of computer technology, and more particularly to a method for augmented reality drawing and a mobile terminal.
Background technology
Augmented reality (AR) is a computer application and human-computer interaction technology developed on the basis of virtual reality. Through computing and visualization techniques, it applies virtual information to the real world and superimposes the real environment and virtual objects onto the same picture or space in real time, so that both exist at the same time. In short, it combines the virtual with the real. Its main application fields include location based services (LBS), weather, games, distance measurement, advertising, and so on.
At present, AR is still at an early stage of development, and many fields are not yet well developed. In the field of drawing, for example, prior-art AR drawing enters the drawing interface by scanning a picture; once the picture has any defect during scanning, the user may have to try several times before entering the drawing interface, which gives a poor user experience. How to solve the low success rate of entering the AR drawing interface and the poor user experience is a problem demanding a prompt solution.
Summary of the invention
Embodiments of the present invention provide a method for augmented reality drawing and a mobile terminal, which can increase the success rate of operating the augmented reality drawing interface and improve the user experience.
A first aspect of the embodiments of the present invention provides a method for augmented reality drawing, comprising the steps of:
identifying a target image in a viewfinder picture captured by a camera of a mobile terminal;
judging whether the similarity between the target image in the viewfinder picture captured by the camera and a preset picture is greater than or equal to a preset threshold;
if the similarity between the target image in the viewfinder picture captured by the camera and the preset picture is greater than or equal to the preset threshold, generating a 3D image corresponding to the preset picture on a display interface of the mobile terminal;
obtaining a first color instruction selected by a user;
detecting a touch location of the user;
if the touch location of the user is detected to be within a surface area of a target object in the 3D image corresponding to the preset picture, generating a first color block corresponding to the first color instruction within a preset size range of the touch location of the user in the surface area of the target object according to a first predetermined manner; and displaying the 3D image containing the first color block on the display interface of the mobile terminal.
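For illustration only, the following is a minimal, non-authoritative sketch of how the steps of the first aspect might be wired together in code; every class, method, and constant name here (ArDrawingMethod, recognizeTargetImage, PRESET_THRESHOLD, and so on) is an assumption of ours and is not taken from the patent.

```java
import java.awt.Color;
import java.awt.geom.Point2D;

/** Hypothetical orchestration of the claimed steps; the helper methods are
 *  placeholders standing in for camera, matching, and rendering subsystems. */
public class ArDrawingMethod {

    static final double PRESET_THRESHOLD = 0.8; // example value from the description

    public void run() {
        Object targetImage = recognizeTargetImage();            // step: identify target image
        double similarity = similarityToPresetPicture(targetImage);
        if (similarity < PRESET_THRESHOLD) {
            return;                                             // no 3D image is generated
        }
        Object model3d = generate3dImageForPresetPicture();     // step: show 3D image
        Color firstColor = obtainFirstColorInstruction();       // step: first color instruction
        Point2D touch = detectTouchLocation();                  // step: touch location
        if (isOnTargetObjectSurface(model3d, touch)) {
            addColorBlock(model3d, touch, firstColor);          // step: generate first color block
        }
        display(model3d);                                       // step: display updated 3D image
    }

    // --- placeholders; real implementations depend on the platform ---
    Object recognizeTargetImage() { return new Object(); }
    double similarityToPresetPicture(Object target) { return 0.9; }
    Object generate3dImageForPresetPicture() { return new Object(); }
    Color obtainFirstColorInstruction() { return Color.BLUE; }
    Point2D detectTouchLocation() { return new Point2D.Double(0.4, 0.6); }
    boolean isOnTargetObjectSurface(Object model, Point2D touch) { return true; }
    void addColorBlock(Object model, Point2D touch, Color color) { }
    void display(Object model) { }

    public static void main(String[] args) { new ArDrawingMethod().run(); }
}
```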
A second aspect of the embodiments of the present invention provides a mobile terminal, comprising:
a recognition unit, configured to identify a target image in a viewfinder picture captured by a camera of the mobile terminal;
a judging unit, configured to judge whether the similarity between the target image in the viewfinder picture captured by the camera and a preset picture is greater than or equal to a preset threshold;
a display unit, configured to generate, on a display interface of the mobile terminal, a 3D image corresponding to the preset picture if the similarity between the target image in the viewfinder picture captured by the camera and the preset picture is greater than or equal to the preset threshold;
a first acquiring unit, configured to obtain a first color instruction selected by a user;
a detecting unit, configured to detect a touch location of the user;
a generation unit, configured to, if the touch location of the user is detected to be within a surface area of a target object in the 3D image corresponding to the preset picture, generate a first color block corresponding to the first color instruction within a preset size range of the touch location of the user in the surface area of the target object according to a first predetermined manner;
the display unit being further configured to display, on the display interface of the mobile terminal, the 3D image containing the first color block.
Implementing the embodiments of the present invention has the following beneficial effects:
The embodiments of the present invention identify a target image in a viewfinder picture captured by a camera of a mobile terminal; judge whether the similarity between the target image in the viewfinder picture captured by the camera and a preset picture is greater than or equal to a preset threshold; if the similarity is greater than or equal to the preset threshold, generate a 3D image corresponding to the preset picture on a display interface of the mobile terminal; obtain a first color instruction selected by a user; detect a touch location of the user; if the touch location of the user is detected to be within a surface area of a target object in the 3D image corresponding to the preset picture, generate a first color block corresponding to the first color instruction within a preset size range of the touch location of the user in the surface area of the target object according to a first predetermined manner; and display the 3D image containing the first color block on the display interface of the mobile terminal. The embodiments of the present invention can therefore increase the success rate of operating the augmented reality drawing interface and improve the user experience.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required in the description of the embodiments are briefly introduced below. Apparently, the accompanying drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art may further obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a first embodiment of a method for augmented reality drawing provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a second embodiment of the method for augmented reality drawing provided by an embodiment of the present invention;
Fig. 3 is a schematic flowchart of a third embodiment of the method for augmented reality drawing provided by an embodiment of the present invention;
Fig. 4 is a schematic flowchart of a fourth embodiment of the method for augmented reality drawing provided by an embodiment of the present invention;
Fig. 5 is a schematic flowchart of a fifth embodiment of the method for augmented reality drawing provided by an embodiment of the present invention;
Fig. 6 is a schematic flowchart of a sixth embodiment of the method for augmented reality drawing provided by an embodiment of the present invention;
Fig. 7 is a schematic flowchart of a seventh embodiment of the method for augmented reality drawing provided by an embodiment of the present invention;
Fig. 8a is a schematic structural diagram of a first embodiment of a mobile terminal provided by an embodiment of the present invention;
Fig. 8b is another schematic structural diagram of the first embodiment of the mobile terminal provided by an embodiment of the present invention;
Fig. 8c is another schematic structural diagram of the first embodiment of the mobile terminal provided by an embodiment of the present invention;
Fig. 8d is another schematic structural diagram of the first embodiment of the mobile terminal provided by an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a second embodiment of the mobile terminal provided by an embodiment of the present invention;
Fig. 10 is a schematic structural diagram of a third embodiment of the mobile terminal provided by an embodiment of the present invention;
Fig. 11 is a schematic structural diagram of a fourth embodiment of the mobile terminal provided by an embodiment of the present invention;
Fig. 12 is a schematic structural diagram of a fifth embodiment of the mobile terminal provided by an embodiment of the present invention;
Fig. 13 is a schematic structural diagram of a sixth embodiment of the mobile terminal provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
In implementation, the device in the embodiments of the present invention may include, but is not limited to, a notebook computer, a mobile phone, a tablet computer, a smart wearable device, and the like. The system of the device refers to the operating system of the device, which may include, but is not limited to, the Android system, the Symbian system, the Windows system, iOS (the mobile operating system developed by Apple), the Windows XP system, the Windows 8 system, and so on. It should be noted that an Android device refers to a device running the Android system, a Symbian device refers to a device running the Symbian system, and so on. The above devices are only examples, not exhaustive; the embodiments include, but are not limited to, the above devices.
The method for augmented reality drawing and the mobile terminal provided by the embodiments of the present invention are described below with reference to Fig. 1 to Fig. 13.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a first embodiment of a method for augmented reality drawing provided by an embodiment of the present invention. The method for augmented reality drawing described in this embodiment comprises the following steps.
S101: identify a target image in a viewfinder picture captured by a camera of a mobile terminal.
The mobile terminal may identify the target image in the viewfinder picture captured by its camera. Further, the mobile terminal may use a preset algorithm to identify the target image in the viewfinder picture captured by the camera. For example, contour extraction may be performed on the viewfinder picture to identify the contour of the target image, or corner features may be extracted from the viewfinder picture to identify the corner features of the target image, thereby identifying the target image in the viewfinder picture captured by the camera.
As a possible embodiment, the mobile terminal identifies the target image in the viewfinder picture captured by its camera after the user opens the augmented reality drawing software on the mobile terminal.
S102: judge whether the similarity between the target image in the viewfinder picture captured by the camera and a preset picture is greater than or equal to a preset threshold.
The mobile terminal may judge whether the similarity between the target image in the viewfinder picture captured by its camera and the preset picture is greater than or equal to the preset threshold. Specifically, the mobile terminal may extract corner feature information of the target image in the viewfinder picture and corner feature information of the preset picture and compare the two; see the embodiment described with reference to Fig. 2 for details.
Further, the preset picture may include, but is not limited to, a two-dimensional code or a general image. The preset threshold may be set to a fixed constant, for example in the range 0.8 to 1, or may be set according to the specific environment. One method of setting the threshold according to the ambient brightness is as follows. First, brightness values are defined in the common way: 0 is black, 255 is white, and the brightness value varies within the range 0 to 255. In addition, under ideal conditions the similarity between the target image in the viewfinder picture captured by the camera and the preset picture is 1, that is, the target image has the same number of feature points as the preset picture and the contours are equally similar. The average luminance (AL) of the environment in which the viewfinder picture is taken is then obtained: when AL is greater than or equal to 0 and less than or equal to 50, the preset threshold may be set to 0.5; when AL is greater than 50 and less than or equal to 100, the preset threshold may be set to 0.6; when AL is greater than 100 and less than or equal to 125, the preset threshold may be set to 0.8; and when AL is greater than 125 and less than or equal to 255, the preset threshold may be set to 0.5.
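As an illustration only, the luminance-dependent threshold described above can be captured in a small lookup function; the class and method names below are hypothetical, not the patent's.

```java
/** Maps the average luminance (AL, 0-255) of the scene around the viewfinder
 *  picture to a preset similarity threshold, reproducing the example ranges
 *  given in the description. A sketch, not a normative implementation. */
public class LuminanceThreshold {
    static double thresholdFor(int averageLuminance) {
        if (averageLuminance < 0 || averageLuminance > 255) {
            throw new IllegalArgumentException("average luminance must be in [0, 255]");
        }
        if (averageLuminance <= 50)  return 0.5; // dim environment
        if (averageLuminance <= 100) return 0.6;
        if (averageLuminance <= 125) return 0.8; // well-lit environment
        return 0.5;                              // very bright, possibly overexposed
    }

    public static void main(String[] args) {
        System.out.println(thresholdFor(30));   // 0.5
        System.out.println(thresholdFor(110));  // 0.8
    }
}
```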
As a possible embodiment, the preset threshold may be set more precisely: besides the average brightness of the environment, the distance between the camera and the center of the target image, or the angle between the camera and the target image, may also be taken into account. This way of setting is relatively complicated and is not described further here.
S103: if the similarity between the target image in the viewfinder picture captured by the camera and the preset picture is greater than or equal to the preset threshold, generate a 3D image corresponding to the preset picture on a display interface of the mobile terminal.
If the similarity between the target image in the viewfinder picture captured by the camera of the mobile terminal and the preset picture is greater than or equal to the preset threshold, the 3D image corresponding to the preset picture is generated on the display interface of the mobile terminal. Further, there may be multiple preset pictures, each with its own corresponding 3D image. If the similarity between the target image in the viewfinder picture captured by the camera and the preset picture is less than the preset threshold, the 3D image corresponding to the preset picture cannot be generated on the display interface of the mobile terminal.
As a possible embodiment, with the camera of the mobile terminal facing the target image, the 3D image corresponding to the preset picture remains on the display interface as long as the camera captures the target image while the mobile terminal rotates, and when the mobile terminal moves, the target object in the 3D image also rotates with the mobile terminal.
As a possible embodiment, the camera of the mobile terminal may always face the target image; when the camera leaves the target image, the 3D image shown on the display interface of the mobile terminal disappears.
As a possible embodiment, after the mobile terminal generates the 3D image corresponding to the preset picture on its display interface, the camera of the mobile terminal may leave the target image in the viewfinder picture.
S104: obtain a first color instruction selected by a user.
The mobile terminal obtains the first color instruction selected by the user and may generate the first color corresponding to the obtained first color instruction. The first color may include, but is not limited to, blue, white, black, purple, green, yellow, orange, pink, coffee, champagne gold, iridescent colors, and so on. After the user opens the augmented reality drawing software on the mobile terminal, the corresponding drawing colors can be found on the display interface of the mobile terminal. When the user selects blue, that is, taps the blue option, the user can paint blue on the target image in the 3D image. When the user taps the blue option again, blue is deselected and the user can select another color.
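The select/deselect behaviour described above (tapping a color option selects it, tapping the same option again clears it) could be modelled as a simple toggle. The class below is a hypothetical sketch of that interaction, not the terminal's actual user-interface code.

```java
import java.awt.Color;
import java.util.Optional;

/** Hypothetical color-instruction state: tapping a color option selects it,
 *  tapping the same option again deselects it, tapping another option switches. */
public class ColorInstruction {
    private Color selected; // null means no color instruction is active

    public void tap(Color option) {
        selected = option.equals(selected) ? null : option;
    }

    public Optional<Color> current() {
        return Optional.ofNullable(selected);
    }

    public static void main(String[] args) {
        ColorInstruction instruction = new ColorInstruction();
        instruction.tap(Color.BLUE);   // first color instruction: blue
        instruction.tap(Color.BLUE);   // tapping blue again removes it
        instruction.tap(Color.YELLOW); // a second color instruction: yellow
        System.out.println(instruction.current()); // Optional[java.awt.Color[r=255,g=255,b=0]]
    }
}
```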
S105: detect a touch location of the user.
The mobile terminal detects the touch location of the user through its touch sensing device, or the mobile terminal may detect the touch location where the user taps with a stylus.
S106: if the touch location of the user is detected to be within a surface area of a target object in the 3D image corresponding to the preset picture, generate a first color block corresponding to the first color instruction within a preset size range of the touch location of the user in the surface area of the target object according to a first predetermined manner.
If the touch location of the user is detected to be within the surface area of the target object in the 3D image corresponding to the preset picture, a first color block with the same color as the first color is generated within the preset size range of the touch location of the user in the surface area of the target object according to the first predetermined manner; that is, the color of the first color block is the same as the first color. Further, the first predetermined manner may be to generate the first color block within the preset size range according to the length of time the user is detected at the touch location: for example, the longer the touch lasts, the larger the area of the generated first color block, and an upper limit may be set on the preset size range. Alternatively, the first predetermined manner may be to generate the first color block within the preset size range according to the detected touch size of the user at the touch location: for example, the larger the touch area, the larger the area of the generated first color block.
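The two variants of the "first predetermined manner" just described (block size growing with touch duration, or with touch area, up to an upper limit) might look like the following. The constants, gains, and names are illustrative assumptions, not values from the patent.

```java
/** Hypothetical mapping from a touch measurement to the area of the color
 *  block generated at the touch location, capped at a preset upper limit. */
public class ColorBlockSizing {
    static final double MAX_BLOCK_AREA = 400.0;   // assumed upper limit, in square pixels

    /** Longer touches produce larger blocks, up to the preset maximum. */
    static double areaFromDuration(double touchSeconds, double pixelsPerSecond) {
        return Math.min(MAX_BLOCK_AREA, touchSeconds * pixelsPerSecond);
    }

    /** Larger touch contact areas produce larger blocks, up to the preset maximum. */
    static double areaFromTouchArea(double contactAreaPixels, double gain) {
        return Math.min(MAX_BLOCK_AREA, contactAreaPixels * gain);
    }

    public static void main(String[] args) {
        System.out.println(areaFromDuration(0.5, 300)); // 150.0
        System.out.println(areaFromDuration(3.0, 300)); // capped at 400.0
        System.out.println(areaFromTouchArea(60, 4));   // 240.0
    }
}
```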
S107: display the 3D image containing the first color block on the display interface of the mobile terminal.
The mobile terminal displays the 3D image containing the first color block on its display interface.
As a possible embodiment, after the 3D image containing the first color block is displayed on the display interface of the mobile terminal, the mobile terminal may, upon obtaining a restore instruction input by the user, restore the adjusted 3D image to the 3D image corresponding to the preset picture.
As a possible embodiment, the mobile terminal obtains a second color selected by the user, where the second color is a color other than the first color; for example, the first color obtained from the user is blue and the second color is yellow. After the mobile terminal displays the 3D image containing the first color block on its display interface, several cases need to be described specifically: 1. if the mobile terminal again detects that the touch location of the user is within the surface area of the target object in the 3D image corresponding to the preset picture and outside the region of the first color block, a second color block with the same color as the second color is generated within the preset size range of the touch location of the user in the surface area of the target object according to the first predetermined manner; 2. if the mobile terminal again detects that the touch location of the user is within the surface area of the target object in the 3D image corresponding to the preset picture and at an inner edge position of the region of the first color block, a second color block with the same color as the second color is generated within the preset size range of the touch location of the user in the surface area of the target object according to the first predetermined manner, where the part of the second color block lying inside the first color block covers the corresponding part of the first color block; 3. as a special case, if the mobile terminal detects that the touch location of the user is again consistent with the touch location that generated the first color block, and the touch duration or touch size is also consistent, the second color block completely covers the first color block. Further, the user may select colors other than the first color and the second color for painting; the specific method is the same as for two colors and is not repeated here.
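One way to picture the overlap rules above is to keep the color blocks in the order they were painted and let the most recent block win wherever blocks overlap. The circular-block model below is purely illustrative; its names and geometry are our assumptions.

```java
import java.awt.Color;
import java.util.ArrayList;
import java.util.List;

/** Hypothetical surface paint model: each touch adds a circular color block,
 *  and later blocks cover earlier ones wherever they overlap. */
public class PaintedSurface {
    record Block(double cx, double cy, double radius, Color color) {
        boolean contains(double x, double y) {
            double dx = x - cx, dy = y - cy;
            return dx * dx + dy * dy <= radius * radius;
        }
    }

    private final List<Block> blocks = new ArrayList<>();

    void paint(double x, double y, double radius, Color color) {
        blocks.add(new Block(x, y, radius, color));
    }

    /** Returns the color shown at (x, y): the newest block covering the point. */
    Color colorAt(double x, double y) {
        for (int i = blocks.size() - 1; i >= 0; i--) {
            if (blocks.get(i).contains(x, y)) return blocks.get(i).color();
        }
        return null; // unpainted: the target object's original color shows through
    }

    public static void main(String[] args) {
        PaintedSurface surface = new PaintedSurface();
        surface.paint(10, 10, 5, Color.BLUE);    // first color block
        surface.paint(12, 10, 5, Color.YELLOW);  // second block partly covers the first
        System.out.println(surface.colorAt(14, 10)); // yellow (covered region)
        System.out.println(surface.colorAt(6, 10));  // blue (only the first block)
    }
}
```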
In the embodiments of the present invention, the target image in the viewfinder picture captured by the camera of the mobile terminal is identified; whether the similarity between the target image in the viewfinder picture captured by the camera and the preset picture is greater than or equal to the preset threshold is judged; if the similarity is greater than or equal to the preset threshold, the 3D image corresponding to the preset picture is generated on the display interface of the mobile terminal; the first color selected by the user is obtained; the touch location of the user is detected; if the touch location of the user is detected to be within the surface area of the target object in the 3D image corresponding to the preset picture, the first color block with the same color as the first color is generated within the preset size range of the touch location of the user in the surface area of the target object according to the first predetermined manner; and the 3D image containing the first color block is displayed on the display interface of the mobile terminal. The embodiments of the present invention can thus increase the success rate of operating the augmented reality drawing interface and improve the user experience.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of a second embodiment of the method for augmented reality drawing provided by an embodiment of the present invention. The method described in this embodiment is a refinement of step S102 of the method described with reference to Fig. 1 and comprises the following steps.
S201: extract feature information of the target image in the viewfinder picture captured by the camera and of the preset picture.
The mobile terminal extracts the feature information of the target image in the viewfinder picture captured by the camera and of the preset picture. The feature information may include, but is not limited to, contour information, corner information, characteristic colors, and target identifiers (anti-counterfeiting labels, barcodes, two-dimensional codes, tag markings, etc.). For example, the mobile terminal may use the Harris corner detection method to extract the corner feature information of the target image in the viewfinder picture captured by the camera and of the preset picture.
S202: compare whether the ratio of the feature information of the target image in the viewfinder picture captured by the camera to the feature information of the preset picture is greater than or equal to the preset threshold.
The mobile terminal compares whether the ratio of the feature information of the target image in the viewfinder picture captured by the camera to the feature information of the preset picture is greater than or equal to the preset threshold. For example, the ratio of the number of corners extracted from the target image in the viewfinder picture captured by the camera to the number of corners in the preset picture may be calculated.
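The corner-count comparison described above can be expressed as a simple ratio check. Extracting the corners themselves (for example with a Harris detector) is left abstract here, and the class, method names, and example counts are assumptions for illustration.

```java
/** Hypothetical similarity check: the ratio of the number of corner features
 *  found in the target image to the number in the preset picture is compared
 *  against the preset threshold. */
public class CornerRatioMatcher {
    static double similarity(int targetCornerCount, int presetCornerCount) {
        if (presetCornerCount == 0) {
            return 0.0; // nothing to match against
        }
        // A ratio above 1 (more corners detected than expected) is clamped to 1.
        return Math.min(1.0, (double) targetCornerCount / presetCornerCount);
    }

    static boolean matches(int targetCornerCount, int presetCornerCount, double threshold) {
        return similarity(targetCornerCount, presetCornerCount) >= threshold;
    }

    public static void main(String[] args) {
        // e.g. 41 corners detected in the viewfinder target vs 50 in the preset picture
        System.out.println(similarity(41, 50));   // 0.82
        System.out.println(matches(41, 50, 0.8)); // true
        System.out.println(matches(30, 50, 0.8)); // false
    }
}
```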
In this embodiment of the present invention, the feature information of the target image in the viewfinder picture captured by the camera and of the preset picture is extracted, and whether the ratio of the feature information of the target image to the feature information of the preset picture is greater than or equal to the preset threshold is compared. With this embodiment, a preset threshold for judging the similarity between the target image and the preset picture can be set, so that the similarity between the target image in the viewfinder picture taken by the camera and the preset picture can be compared.
Referring to Fig. 3, Fig. 3 is a schematic flowchart of a third embodiment of the method for augmented reality drawing provided by an embodiment of the present invention. The method described in this embodiment is a refinement of step S106 of the method described with reference to Fig. 1 and comprises the following steps.
S301: determine the touch size of the user at the touch location.
The mobile terminal determines the touch size of the user at the touch location, that is, the mobile terminal detects the touch size of the user at the touch location through its own touch sensor.
S302: generate a first color block with the same color as the first color within the preset size range of the touch location of the user in the surface area of the target object according to the touch size.
The mobile terminal generates the first color block corresponding to the first color instruction within the preset size range of the touch location of the user in the surface area of the target object according to the touch size: the larger the touch area, the larger the generated first color block.
As a possible embodiment, the mobile terminal determines the touch duration of the user at the touch location and generates the first color block with the same color as the first color within the preset size range of the touch location of the user in the surface area of the target object according to the touch duration.
As a possible embodiment, after the mobile terminal determines the touch size of the user at the touch location and generates the first color block corresponding to the first color instruction within the preset size range of the touch location of the user in the surface area of the target object according to the touch size, the mobile terminal detects the touch location of the user again. Several cases need to be described specifically: 1. if the mobile terminal again detects that the touch location of the user is within the surface area of the target object in the 3D image corresponding to the preset picture and outside the region of the first color block, another first color block corresponding to the first color instruction is generated according to the touch size of the user at the touch location of the user in the surface area of the target object; 2. if the mobile terminal again detects that the touch location of the user is within the surface area of the target object in the 3D image corresponding to the preset picture and at an inner edge position of the region of the first color block, a new first color block corresponding to the first color instruction is generated according to the touch size of the user at the touch location of the user in the surface area of the target object, where the part of the new first color block lying inside the original first color block covers the corresponding part of the original first color block. Further, the user may select a second color instruction, and when the touch location of the user is detected in the surface area of the target object, a second color block corresponding to the second color instruction is generated according to the size of the touch area of the user; the specific method is the same as the above method and is not repeated here.
As a possible implementation, the touch duration of the user at the touch location is determined, and the first color block with the same color as the first color is generated within the preset size range of the touch location of the user in the surface area of the target object according to the touch duration. The specific implementation is similar to the touch-size case described above and is not repeated here.
In this embodiment of the present invention, the touch size of the user at the touch location is determined, and the first color block with the same color as the first color is generated within the preset size range of the touch location of the user in the surface area of the target object according to the touch size. With this embodiment, the size of the color block can be determined according to the touch area of the user or the touch duration of the user.
Referring to Fig. 4, Fig. 4 is a schematic flowchart of a fourth embodiment of the method for augmented reality drawing provided by an embodiment of the present invention. The method described in this embodiment adds steps S408 to S410 after the method described with reference to Fig. 1, as follows.
S401: identify a target image in a viewfinder picture captured by a camera of a mobile terminal.
For a specific description of step S401, refer to step S101 of the method described with reference to Fig. 1.
S402: judge whether the similarity between the target image in the viewfinder picture captured by the camera and a preset picture is greater than or equal to a preset threshold.
For a specific description of step S402, refer to step S102 of the method described with reference to Fig. 1.
S403: if the similarity between the target image in the viewfinder picture captured by the camera and the preset picture is greater than or equal to the preset threshold, generate a 3D image corresponding to the preset picture on a display interface of the mobile terminal.
For a specific description of step S403, refer to step S103 of the method described with reference to Fig. 1.
S404: obtain a first color instruction selected by a user.
For a specific description of step S404, refer to step S104 of the method described with reference to Fig. 1.
S405: detect a touch location of the user.
For a specific description of step S405, refer to step S105 of the method described with reference to Fig. 1.
S406: if the touch location of the user is detected to be within a surface area of a target object in the 3D image corresponding to the preset picture, generate a first color block corresponding to the first color instruction within a preset size range of the touch location of the user in the surface area of the target object according to a first predetermined manner.
For a specific description of step S406, refer to step S106 of the method described with reference to Fig. 1.
S407: display the 3D image containing the first color block on the display interface of the mobile terminal.
For a specific description of step S407, refer to step S107 of the method described with reference to Fig. 1.
S408: obtain an elimination instruction selected by the user.
The mobile terminal may obtain the elimination instruction selected by the user. The elimination instruction may be used to eliminate the first color block.
S409: detect the touch location of the user.
The mobile terminal detects the touch location of the user. In places that have not been painted, the mobile terminal cannot erase the original color of the 3D image.
S410: if the touch location of the user is detected to be within the region of the first color block, eliminate the first color at the touch location of the user in the first color block according to a second predetermined manner.
If the mobile terminal detects that the touch location of the user is within the region of the first color block, the first color at the touch location of the user in the first color block is eliminated according to the second predetermined manner.
Further, the second predetermined manner may be that the mobile terminal eliminates the first color at the touch location of the user in the first color block according to the touch size of the user at the touch location, or that the mobile terminal eliminates the first color at the touch location of the user in the first color block according to the touch duration of the user at the touch location. When the mobile terminal obtains the elimination instruction selected by the user, the first color at the touch location of the user in the first color block may be eliminated according to the touch size or touch duration of the user, on the same principle as generating the first color block when the mobile terminal obtains the first color instruction selected by the user; this is not repeated here.
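A standalone sketch of this "second predetermined manner": color is removed only at painted positions, and the erased patch grows with the touch size (or, alternatively, the touch duration). The pixel-map representation, names, and sizing rule are all our assumptions for illustration.

```java
import java.awt.Color;
import java.util.HashMap;
import java.util.Map;

/** Hypothetical eraser: color is removed only at painted positions inside the
 *  first color block; unpainted positions keep the 3D image's original color. */
public class ColorEraser {
    // Painted pixels of the first color block, keyed by "x,y".
    private final Map<String, Color> painted = new HashMap<>();

    void paint(int x, int y, Color color) { painted.put(x + "," + y, color); }

    /** Second predetermined manner (sketch): erase a square patch around the
     *  touch location whose half-width is proportional to the touch size. */
    void erase(int touchX, int touchY, int touchSize) {
        int halfWidth = Math.max(1, touchSize / 2);
        for (int x = touchX - halfWidth; x <= touchX + halfWidth; x++) {
            for (int y = touchY - halfWidth; y <= touchY + halfWidth; y++) {
                painted.remove(x + "," + y); // no effect on unpainted positions
            }
        }
    }

    boolean isPainted(int x, int y) { return painted.containsKey(x + "," + y); }

    public static void main(String[] args) {
        ColorEraser surface = new ColorEraser();
        surface.paint(5, 5, Color.BLUE);
        surface.paint(9, 9, Color.BLUE);
        surface.erase(5, 5, 4);                      // touch inside the color block
        System.out.println(surface.isPainted(5, 5)); // false: color removed
        System.out.println(surface.isPainted(9, 9)); // true: outside the erased patch
    }
}
```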
In this embodiment of the present invention, the target image in the viewfinder picture captured by the camera of the mobile terminal is identified; whether the similarity between the target image in the viewfinder picture captured by the camera and the preset picture is greater than or equal to the preset threshold is judged; if the similarity is greater than or equal to the preset threshold, the 3D image corresponding to the preset picture is generated on the display interface of the mobile terminal; the first color instruction selected by the user is obtained; the touch location of the user is detected; if the touch location of the user is detected to be within the surface area of the target object in the 3D image corresponding to the preset picture, the first color block corresponding to the first color instruction is generated within the preset size range of the touch location of the user in the surface area of the target object according to the first predetermined manner; the 3D image containing the first color block is displayed on the display interface of the mobile terminal; the elimination instruction selected by the user is obtained; the touch location of the user is detected; and if the touch location of the user is detected to be within the region of the first color block, the first color at the touch location of the user in the first color block is eliminated according to the second predetermined manner. With this embodiment, the generated 3D image can be painted on and the painted color can be erased, improving the user experience.
Referring to Fig. 5, Fig. 5 is a schematic flowchart of a fifth embodiment of the method for augmented reality drawing provided by an embodiment of the present invention. The method for augmented reality drawing described in this embodiment comprises the following steps.
S501: identify a target image in a viewfinder picture captured by a camera of a mobile terminal.
For a specific description of step S501, refer to step S101 of the method described with reference to Fig. 1.
S502: judge whether the similarity between the target image in the viewfinder picture captured by the camera and a preset picture is greater than or equal to a preset threshold.
For a specific description of step S502, refer to step S102 of the method described with reference to Fig. 1.
S503: if the similarity between the target image in the viewfinder picture captured by the camera and the preset picture is greater than or equal to the preset threshold, generate a 3D image corresponding to the preset picture on a display interface of the mobile terminal.
For a specific description of step S503, refer to step S103 of the method described with reference to Fig. 1.
S504: obtain a 3D image volume change instruction selected by the user, the 3D image volume change instruction being a volume increase instruction or a volume decrease instruction.
The mobile terminal obtains the 3D image volume change instruction selected by the user. The volume change instruction is used to change the volume of the target object in the 3D image, and may be a volume increase instruction or a volume decrease instruction.
S505: detect the touch location of the user.
For a specific description of step S505, refer to step S105 of the method described with reference to Fig. 1.
S506: if the touch location of the user is detected to be within the surface area of the target object in the 3D image corresponding to the preset picture, adjust the volume of the target object within the preset size range of the touch location of the user in the surface area of the target object according to a third predetermined manner.
If the mobile terminal detects that the touch location of the user is within the surface area of the target object in the 3D image corresponding to the preset picture, the volume of the target object is adjusted within the preset size range of the touch location of the user in the surface area of the target object according to the third predetermined manner. The third predetermined manner may refer to the methods described below with reference to Fig. 6 or Fig. 7.
S507: display the adjusted 3D image on the display interface of the mobile terminal.
For a specific description of step S507, refer to step S107 of the method described with reference to Fig. 1.
In this embodiment of the present invention, the target image in the viewfinder picture captured by the camera of the mobile terminal is identified; whether the similarity between the target image in the viewfinder picture captured by the camera and the preset picture is greater than or equal to the preset threshold is judged; if the similarity is greater than or equal to the preset threshold, the 3D image corresponding to the preset picture is generated on the display interface of the mobile terminal; the 3D image volume change instruction selected by the user is obtained, the 3D image volume change instruction being a volume increase instruction or a volume decrease instruction; the touch location of the user is detected; if the touch location of the user is detected to be within the surface area of the target object in the 3D image corresponding to the preset picture, the volume of the target object is adjusted within the preset size range of the touch location of the user in the surface area of the target object according to the third predetermined manner; and the adjusted 3D image is displayed on the display interface of the mobile terminal. With this embodiment, a corresponding 3D image can be generated from the viewfinder picture taken by the camera of the mobile terminal, and volume change operations can be performed on the 3D image, improving the user experience.
Referring to Fig. 6, Fig. 6 is a schematic flowchart of a sixth embodiment of the method for augmented reality drawing provided by an embodiment of the present invention. The method described in this embodiment is a refinement of step S506 of the method described with reference to Fig. 5 and comprises the following steps.
S601: determine the touch size of the user at the touch location.
The mobile terminal determines the touch size of the user at the touch location, that is, the mobile terminal detects the touch size of the user at the touch location through its own touch sensor.
S602: increase the volume of the target object in an outwardly convex manner within the preset size range of the touch location of the user in the surface area of the target object according to the touch size.
The mobile terminal increases the volume of the target object in an outwardly convex manner within the preset size range of the touch location of the user in the surface area of the target object according to the touch size of the user at the touch location. The preset size may be a value range; for example, the contact area of the user has an upper limit, and within that upper limit the preset size range may be proportional to the touch size of the user at the touch location.
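A toy sketch of this "third predetermined manner": a displacement applied at the touch location, proportional to the touch size (or touch duration) up to an upper limit, positive to bulge outward for a volume increase instruction and negative to cave inward for a volume decrease instruction (as in the seventh embodiment below). The names, cap, and gains are assumptions for illustration.

```java
/** Hypothetical volume adjustment: the surface is displaced at the touch
 *  location; displacement magnitude is proportional to the touch size (or
 *  touch duration), capped at a preset upper limit, and its sign depends on
 *  whether a volume increase or volume decrease instruction is active. */
public class VolumeAdjuster {
    static final double MAX_DISPLACEMENT = 20.0; // assumed cap, in model units

    enum Instruction { VOLUME_INCREASE, VOLUME_DECREASE }

    /** Returns the signed displacement to apply at the touch location. */
    static double displacement(Instruction instruction, double touchMeasure, double gain) {
        double magnitude = Math.min(MAX_DISPLACEMENT, touchMeasure * gain);
        return instruction == Instruction.VOLUME_INCREASE ? magnitude : -magnitude;
    }

    public static void main(String[] args) {
        // Touch area of 30 px^2 with an outward bulge (volume increase):
        System.out.println(displacement(Instruction.VOLUME_INCREASE, 30, 0.5));  // 15.0
        // Long 5-second touch with an inward dent (volume decrease), capped:
        System.out.println(displacement(Instruction.VOLUME_DECREASE, 5.0, 6.0)); // -20.0
    }
}
```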
As a possible embodiment, suppose the mobile terminal determines the touch size of the user at the touch location and increases the volume of the target object in an outwardly convex manner within the preset size range of the touch location of the user in the surface area of the target object according to the touch size, so that the changed region becomes a first volume increase region. Afterwards, the mobile terminal detects the touch location of the user again. Several cases need to be described specifically: 1. if the mobile terminal again detects that the touch location of the user is within the surface area of the target object in the 3D image corresponding to the preset picture and outside the first volume increase region, the volume of the target object is increased according to the touch size of the user at the touch location of the user in the surface area of the target object, forming a second volume increase region; 2. if the mobile terminal again detects that the touch location of the user is within the surface area of the target object in the 3D image corresponding to the preset picture and at an inner edge position of the first volume increase region, the volume of the target object is increased according to the touch size of the user at the touch location of the user in the surface area of the target object, forming a second volume increase region, where the part of the second volume increase region lying inside the original first volume increase region covers the corresponding part of the first volume increase region. In addition, the mobile terminal may determine the touch duration of the user at the touch location and increase the volume of the target object in an outwardly convex manner within the preset size range of the touch location of the user in the surface area of the target object according to the touch duration; the specific method is the same as the above method and is not repeated here.
In this embodiment of the present invention, the touch size of the user at the touch location is determined, and the volume of the target object is increased in an outwardly convex manner within the preset size range of the touch location of the user in the surface area of the target object according to the touch size. With this embodiment, the volume of the target object can be adjusted according to the touch size of the user or the touch duration of the user.
Referring to Fig. 7, Fig. 7 is a schematic flowchart of a seventh embodiment of the method for augmented reality drawing provided by an embodiment of the present invention. The method described in this embodiment is a refinement of step S506 of the method described with reference to Fig. 5 and comprises the following steps.
S701: determine the touch duration of the user at the touch location.
The mobile terminal determines the touch duration of the user at the touch location, that is, the mobile terminal detects the user's touch at the touch location through its own touch sensor and records, through its clock, how long the user touches the touch location.
S702: reduce the volume of the target object in an inwardly concave manner within the preset size range of the touch location of the user in the surface area of the target object according to the touch duration.
The mobile terminal reduces the volume of the target object in an inwardly concave manner within the preset size range of the touch location of the user in the surface area of the target object according to the touch duration of the user at the touch location. The preset size may be a value range; for example, the touch duration of the user has an upper limit, and within that upper limit the preset size range may be proportional to the touch duration of the user at the touch location. The specific implementation process is similar to the method described with reference to Fig. 6.
As a possible embodiment, suppose the volume increase tool has been selected on the mobile terminal, the touch size of the user at the touch location has been determined, and the volume of the target object has been increased in an outwardly convex manner within the preset size range of the touch location of the user in the surface area of the target object according to the touch size, so that the changed region becomes a first volume increase region. Afterwards, the volume decrease tool is selected on the mobile terminal and the mobile terminal detects the touch location of the user again. Several cases need to be described specifically: 1. if the mobile terminal again detects that the touch location of the user is within the surface area of the target object in the 3D image corresponding to the preset picture and outside the first volume increase region, the volume of the target object is reduced according to the touch size of the user at the touch location of the user in the surface area of the target object, forming a second volume decrease region; 2. if the mobile terminal again detects that the touch location of the user is within the surface area of the target object in the 3D image corresponding to the preset picture and at an inner edge position of the first volume increase region, the volume of the target object is reduced according to the touch size of the user at the touch location of the user in the surface area of the target object, forming a second volume decrease region, where the part of the second volume decrease region lying inside the original first volume increase region covers the corresponding part of the first volume increase region. In addition, the mobile terminal may determine the touch duration of the user at the touch location and reduce the volume of the target object in an inwardly concave manner within the preset size range of the touch location of the user in the surface area of the target object according to the touch duration; the specific method is the same as the above method and is not repeated here.
In this embodiment of the present invention, the touch duration of the user at the touch location is determined, and the volume of the target object is reduced in an inwardly concave manner within the preset size range of the touch location of the user in the surface area of the target object according to the touch duration. With this embodiment, the volume of the target object can be adjusted according to the touch size of the user or the touch duration of the user.
Referring to Fig. 8a, Fig. 8a is a schematic structural diagram of a first embodiment of a mobile terminal provided by an embodiment of the present invention. The mobile terminal described in this embodiment is as follows.
Recognition unit 801 is configured to identify a target image in a viewfinder picture captured by a camera of the mobile terminal.
Recognition unit 801 may identify the target image in the viewfinder picture captured by the camera of the mobile terminal. Further, recognition unit 801 may use a preset algorithm to identify the target image in the viewfinder picture captured by the camera of the mobile terminal. For example, contour extraction may be performed on the viewfinder picture to identify the contour of the target image, or corner features may be extracted from the viewfinder picture to identify the corner features of the target image, thereby identifying the target image in the viewfinder picture captured by the camera.
As a possible embodiment, after the user opens the augmented reality drawing software on the mobile terminal, recognition unit 801 identifies the target image in the viewfinder picture captured by the camera of the mobile terminal.
Judging unit 802 is configured to judge whether the similarity between the target image in the viewfinder picture captured by the camera and a preset picture is greater than or equal to a preset threshold.
Judging unit 802 may judge whether the similarity between the target image in the viewfinder picture captured by the camera and the preset picture is greater than or equal to the preset threshold. Specifically, the mobile terminal may extract corner feature information of the target image in the viewfinder picture and corner feature information of the preset picture and compare the two; see the embodiment described with reference to Fig. 2 for details.
Further, the preset picture may include, but is not limited to, a two-dimensional code or a general image. The preset threshold may be set to a fixed constant, for example in the range 0.8 to 1, or may be set according to the specific environment. One method of setting the threshold according to the ambient brightness is as follows. First, brightness values are defined in the common way: 0 is black, 255 is white, and the brightness value varies within the range 0 to 255. In addition, under ideal conditions the similarity between the target image in the viewfinder picture captured by the camera and the preset picture is 1, that is, the target image has the same number of feature points as the preset picture and the contours are equally similar. The average luminance (AL) of the environment in which the viewfinder picture is taken is then obtained: when AL is greater than or equal to 0 and less than or equal to 50, the preset threshold may be set to 0.5; when AL is greater than 50 and less than or equal to 100, the preset threshold may be set to 0.6; when AL is greater than 100 and less than or equal to 125, the preset threshold may be set to 0.8; and when AL is greater than 125 and less than or equal to 255, the preset threshold may be set to 0.5.
As a possible embodiment, the preset threshold may be set more precisely: besides the average brightness of the environment, the distance between the camera and the center of the target image, or the angle between the camera and the target image, may also be taken into account. This way of setting is relatively complicated and is not described further here.
Display unit 803 is configured to generate, on the display interface of the mobile terminal, the 3D image corresponding to the preset picture if the similarity between the target image in the viewfinder picture captured by the camera and the preset picture is greater than or equal to the predetermined threshold value.
Wherein, if the similarity between the target image in the viewfinder picture captured by the camera of the mobile terminal and the preset picture is greater than or equal to the predetermined threshold value, display unit 803 generates the 3D image corresponding to the preset picture on the display interface of the mobile terminal. Further, there may be multiple preset pictures, and each preset picture has a corresponding 3D image.
As a possible embodiment, the camera of the mobile terminal faces the target image. When the mobile terminal rotates, as long as the camera of the mobile terminal still captures the target image, the 3D image corresponding to the preset picture always appears on the display interface of the mobile terminal; when the mobile terminal moves, the target object in the 3D image also rotates along with the mobile terminal.
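Keeping the 3D image attached to the target image while the terminal moves is typically done by re-estimating the marker pose every frame. The sketch below is one illustrative way to do this with OpenCV; the matched 2D/3D point sets and the calibrated camera_matrix / dist_coeffs are assumptions, since the patent does not specify a tracking method.

```python
# A minimal per-frame pose-tracking sketch (assumed inputs: marker corner
# positions in the image matched to known 3D points on the preset picture
# plane; camera_matrix / dist_coeffs from calibration).
import numpy as np
import cv2

def anchor_model_to_marker(object_points, image_points, camera_matrix, dist_coeffs):
    """Estimate the marker pose so the 3D image stays attached to the target
    image while the mobile terminal rotates or moves."""
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None                       # marker lost: the 3D image can be hidden
    rotation, _ = cv2.Rodrigues(rvec)     # 3x3 rotation of the marker
    model_view = np.eye(4)
    model_view[:3, :3] = rotation
    model_view[:3, 3] = tvec.ravel()
    return model_view                     # fed to the renderer each frame
```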
As a possible embodiment, the camera of the mobile terminal can always face the target image; when the camera of the mobile terminal leaves the target image, the 3D image displayed on the display interface of the mobile terminal disappears.
As a possible embodiment, after the mobile terminal generates the 3D image corresponding to the preset picture on its display interface, the camera of the mobile terminal may leave the target image in the viewfinder picture.
First acquiring unit 804 is configured to obtain the first color instruction selected by the user.
Wherein, first acquiring unit 804 obtains the first color instruction selected by the user, and the first color corresponding to the obtained first color instruction can be generated. The first color may include, but is not limited to: blue, white, black, purple, green, yellow, orange, pink, coffee, gold, iridescent colors, etc. After the user opens the augmented reality drawing software on the mobile terminal, the corresponding drawing color options can be found on the display interface of the mobile terminal. When the user selects blue, that is, when the user clicks the blue option, the user can paint blue on the target image in the 3D image. When the user clicks the blue option again, the blue selection is removed, and the user can select another color.
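The click-to-select, click-again-to-deselect behaviour described above amounts to a simple toggle on the currently active color. A minimal sketch follows; the ColorPalette class and its names are illustrative, not taken from the patent.

```python
# A minimal sketch of the colour-selection toggle.
class ColorPalette:
    def __init__(self):
        self.selected = None            # currently active first color, if any

    def click(self, color):
        """Clicking a colour selects it; clicking the same colour again
        removes the selection so another colour can be chosen."""
        if self.selected == color:
            self.selected = None        # second click on blue removes blue
        else:
            self.selected = color
        return self.selected

palette = ColorPalette()
palette.click("blue")    # -> "blue" becomes the first color instruction
palette.click("blue")    # -> None, blue removed
```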
Detecting unit 805 is configured to detect the touch location of the user.
Wherein, detecting unit 805 detects the touch location of the user through the touch sensing device of the mobile terminal, or the mobile terminal can detect the touch location that the user clicks with a stylus.
Generation unit 806 is configured to generate, if it is detected that the touch location of the user is within the surface region of the target object in the 3D image corresponding to the preset picture, the first color block identical to the first color within the preset size range of the touch location of the user in the surface region of the target object according to the first predetermined manner.
Wherein, if it is detected that the touch location of the user is within the surface region of the target object in the 3D image corresponding to the preset picture, generation unit 806 generates the first color block corresponding to the first color instruction within the preset size range of the touch location of the user in the surface region of the target object according to the first predetermined manner. The color of the first color block is the same as the first color. Further, the first predetermined manner may be to generate the first color block within the preset size range according to the detected time span of the user at the touch location; for example, the longer the time span, the larger the area of the generated first color block, and an upper limit may be set on the preset size range. Further, the first predetermined manner may also be to generate the first color block within the preset size range according to the detected touch size of the user at the touch location; for example, the larger the touch area, the larger the area of the generated first color block.
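Either variant of the first predetermined manner reduces to mapping a touch measurement to a capped block size. The sketch below illustrates both mappings; the rate, scale and max_radius constants are illustrative assumptions, not values from the patent.

```python
# A minimal sketch of the first predetermined manner: the painted block grows
# with touch duration or touch area, capped by the preset size range.
def block_radius_from_duration(duration_s, rate=20.0, max_radius=80.0):
    """Longer touches produce larger color blocks, up to max_radius (pixels)."""
    return min(rate * duration_s, max_radius)

def block_radius_from_touch_area(touch_area_px, scale=0.5, max_radius=80.0):
    """Larger contact areas produce larger color blocks, up to max_radius."""
    return min(scale * touch_area_px ** 0.5, max_radius)
```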
The display unit 803 is also configured to display, on the display interface of the mobile terminal, the 3D image comprising the first color block.
Wherein, display unit 803 displays the 3D image comprising the first color block on its display interface.
As a possible embodiment, as shown in Fig. 8b, the mobile terminal may also comprise a reduction unit 807: after display unit 803 displays the 3D image comprising the first color block on the display interface of the mobile terminal, when the restore instruction input by the user is obtained, reduction unit 807 can restore the adjusted 3D image to the 3D image corresponding to the preset picture.
As a possible embodiment, as shown in Fig. 8c, the mobile terminal may also comprise a second acquisition unit 808 and an elimination unit 809, specifically as follows:
Second acquisition unit 808 is configured to obtain the elimination instruction selected by the user.
Elimination unit 809 is configured to eliminate, if detecting unit 805 detects that the touch location of the user is within the regional range of the first color block, the first color at the touch location of the user in the first color block according to the second predetermined manner.
Further, the second predetermined manner may be that elimination unit 809 eliminates the first color at the touch location of the user in the first color block according to the touch size of the user at the touch location; or the second predetermined manner may be that the mobile terminal eliminates the first color at the touch location of the user in the first color block according to the touch time span of the user at the touch location. When the mobile terminal obtains the elimination instruction selected by the user, it can eliminate the first color at the touch location of the user in the first color block according to the touch size or the touch time span of the user; the concrete principle is the same as that of generating the first color block when the mobile terminal obtains the first color instruction selected by the user, and is not repeated here.
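Since erasing mirrors painting, the eliminated area can reuse the same touch-size mapping as the generated block. The following is a minimal sketch under the assumption that the first color block is stored as a pixel mask; the scale and max_radius constants are again illustrative.

```python
# A minimal erase sketch for the second predetermined manner.
import numpy as np

def erase_at(color_mask, touch_xy, touch_area_px, scale=0.5, max_radius=80.0):
    """Clear the first color inside a radius that grows with the touch size,
    mirroring the way the block was originally painted."""
    radius = min(scale * touch_area_px ** 0.5, max_radius)
    h, w = color_mask.shape
    yy, xx = np.ogrid[:h, :w]
    inside = (xx - touch_xy[0]) ** 2 + (yy - touch_xy[1]) ** 2 <= radius ** 2
    color_mask[inside] = 0
    return color_mask
```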
As a possible embodiment, as shown in Fig. 8d, the mobile terminal may also comprise: a third acquiring unit 810, configured to obtain the 3D image volume change instruction selected by the user, where the 3D image volume change instruction is a volume increase instruction or a volume decrease instruction; and an adjustment unit 811, configured to adjust, if the detecting unit detects that the touch location of the user is within the surface region of the target object in the 3D image corresponding to the preset picture, the volume of the target object within the preset size range of the touch location of the user in the surface region of the target object according to the third predetermined manner.
As a possible embodiment, the mobile terminal may also comprise all of the above units, which is not repeated here.
In the embodiment of the present invention, recognition unit 801 identifies the target image in the viewfinder picture captured by the camera of the mobile terminal; judging unit 802 judges whether the similarity between the target image in the viewfinder picture captured by the camera and the preset picture is greater than or equal to the predetermined threshold value; if the similarity is greater than or equal to the predetermined threshold value, display unit 803 generates the 3D image corresponding to the preset picture on the display interface of the mobile terminal; first acquiring unit 804 obtains the first color instruction selected by the user; detecting unit 805 detects the touch location of the user; if it is detected that the touch location of the user is within the surface region of the target object in the 3D image corresponding to the preset picture, generation unit 806 generates the first color block corresponding to the first color instruction within the preset size range of the touch location of the user in the surface region of the target object according to the first predetermined manner; display unit 803 displays the 3D image comprising the first color block on the display interface of the mobile terminal. Adopting the embodiment of the present invention can improve the success rate of operating the interface in augmented reality drawing and improve the user experience.
Refer to Fig. 9. Fig. 9 is a structural schematic diagram of the second embodiment of a mobile terminal provided by the embodiment of the present invention. The mobile terminal described in the present embodiment is a concrete refinement of judging unit 802 in the mobile terminal described with reference to Fig. 8, and comprises an extraction unit 901 and a comparing unit 902, specifically as follows:
Extraction unit 901 is configured to extract the characteristic information of the target image in the viewfinder picture captured by the camera and the characteristic information of the preset picture.
Wherein, the mobile terminal extracts the characteristic information of the target image in the viewfinder picture captured by the camera and of the preset picture. The characteristic information may include, but is not limited to: contour information, corner information, characteristic colors, and target identifiers (anti-counterfeit labels, bar codes, two-dimensional codes, tag markings, etc.). For example, the mobile terminal can adopt the Harris corner extraction method to extract the corner feature information of the target image in the viewfinder picture captured by the camera and of the preset picture.
Comparing unit 902 is configured to compare whether the ratio of the characteristic information of the target image in the viewfinder picture captured by the camera to the characteristic information of the preset picture is greater than or equal to the predetermined threshold value.
Wherein, the mobile terminal compares whether the ratio of the characteristic information of the target image in the viewfinder picture captured by the camera to the characteristic information of the preset picture is greater than or equal to the predetermined threshold value. The mobile terminal can take the ratio of the characteristic information of the target image in the viewfinder picture captured by the camera to the characteristic information of the preset picture; for example, the number of corners extracted from the target image in the viewfinder picture captured by the camera can be divided by the number of corners in the preset picture.
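As one illustration of this comparison, the sketch below extracts Harris corners from both images with OpenCV and compares the corner-count ratio against the threshold; the response-thresholding rule used to count corners is an assumption, since the patent only states that corner features are extracted and their ratio is compared.

```python
# A minimal corner-count ratio sketch (assumes grayscale numpy images).
import cv2
import numpy as np

def corner_count(gray, quality=0.01):
    """Count Harris responses above a fraction of the strongest response."""
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    return int((response > quality * response.max()).sum())

def is_similar(target_gray, preset_gray, threshold):
    """Ratio of the target image's corner count to the preset picture's corner
    count, compared against the predetermined threshold value."""
    preset_corners = corner_count(preset_gray)
    if preset_corners == 0:
        return False
    ratio = corner_count(target_gray) / preset_corners
    return ratio >= threshold
```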
In the embodiment of the present invention, extraction unit 901 extracts the characteristic information of the target image in the viewfinder picture captured by the camera and the characteristic information of the preset picture; comparing unit 902 compares whether the ratio of the characteristic information of the target image in the viewfinder picture captured by the camera to the characteristic information of the preset picture is greater than or equal to the predetermined threshold value. Adopting the embodiment of the present invention, the predetermined threshold value for judging the similarity between the target image and the preset picture can be set, so as to compare the similarity between the target image in the viewfinder picture captured by the camera and the preset picture.
Refer to Figure 10. Figure 10 is a structural schematic diagram of the third embodiment of a mobile terminal provided by the embodiment of the present invention. The mobile terminal described in the present embodiment is a concrete refinement of generation unit 806 in the mobile terminal described with reference to Fig. 8, and comprises a first determining unit 1001 and a generation subelement 1002, specifically as follows:
First determining unit 1001 is configured to determine the touch size of the user at the touch location.
Wherein, first determining unit 1001 determines the touch size of the user at the touch location; that is, first determining unit 1001 detects the touch size of the user at the touch location through the touch sensor of the mobile terminal.
Generation subelement 1002 is configured to generate the first color block identical to the first color within the preset size range of the touch location of the user in the surface region of the target object according to the touch size.
Wherein, generation subelement 1002 can generate the first color block corresponding to the first color instruction within the preset size range of the touch location of the user in the surface region of the target object according to the touch size: the larger the touch area, the larger the generated first color block.
As a possible embodiment, first determining unit 1001 determines the touch time span of the user at the touch location, and generation subelement 1002 generates the first color block identical to the first color within the preset size range of the touch location of the user in the surface region of the target object according to the touch time span.
As a possible embodiment, after first determining unit 1001 determines the touch size of the user at the touch location and generation subelement 1002 generates the first color block corresponding to the first color instruction within the preset size range of the touch location of the user in the surface region of the target object according to the touch size, the mobile terminal detects the touch location of the user again. Several cases need specific description: 1. If the mobile terminal detects again that the touch location of the user is within the surface region of the target object in the 3D image corresponding to the preset picture and outside the region of the first color block, a first color block corresponding to the first color instruction is generated at the touch location of the user in the surface region of the target object according to the touch size of the user. 2. If the mobile terminal detects again that the touch location of the user is within the surface region of the target object in the 3D image corresponding to the preset picture and at the inner edge of the region of the first color block, a new first color block corresponding to the first color instruction is generated at the touch location of the user in the surface region of the target object according to the touch size of the user, where the part of the new first color block lying inside the original first color block covers the corresponding part of the first color block. Further, the user can select a second color instruction; when the touch location of the user is detected in the surface region of the target object, a second color block corresponding to the second color instruction is generated according to the size of the touch area of the user. The concrete method is the same as the above method and is not repeated here.
As a possible implementation method, the touch time span of the user at the touch location is determined, and the first color block identical to the first color is generated within the preset size range of the touch location of the user in the surface region of the target object according to the touch time span. The specific implementation is similar to the touch size case described above and is not repeated here.
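One way to realize the repeated-touch behaviour just described is to keep the painted blocks in drawing order, so that a later block naturally covers the overlapping part of an earlier one when rendered. The sketch below assumes such a data model (a list of circular blocks); the class and constants are illustrative, not from the patent.

```python
# A minimal sketch of repeated touches painting overlapping color blocks.
class PaintLayer:
    def __init__(self):
        self.blocks = []                  # drawn in order; later covers earlier

    def paint(self, center_xy, touch_area_px, color, scale=0.5, max_radius=80.0):
        radius = min(scale * touch_area_px ** 0.5, max_radius)
        # Whether the new touch lands outside the first block or at its inner
        # edge, a new block is appended; where the two overlap, the newer
        # block covers the corresponding part of the older one when rendered.
        self.blocks.append((center_xy, radius, color))

layer = PaintLayer()
layer.paint((120, 80), 400, "blue")       # first color block
layer.paint((135, 90), 250, "blue")       # overlapping second touch
```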
In the embodiment of the present invention, first determining unit 1001 determines the touch size of the user at the touch location; generation subelement 1002 generates the first color block identical to the first color within the preset size range of the touch location of the user in the surface region of the target object according to the touch size. Adopting the embodiment of the present invention, the size of the color block can be determined according to the touch area of the user or the touch time span of the user.
Refer to Figure 11. Figure 11 is a structural schematic diagram of the fourth embodiment of a mobile terminal provided by the embodiment of the present invention. The mobile terminal described in the present embodiment is a concrete refinement of adjustment unit 811 in the mobile terminal described with reference to Fig. 8d, and comprises a second determining unit 1101 and an increase unit 1102, specifically as follows:
Second determining unit 1101 is configured to determine the touch size of the user at the touch location.
Wherein, second determining unit 1101 determines the touch size of the user at the touch location; that is, second determining unit 1101 detects the touch size of the user at the touch location through the touch sensor of the mobile terminal.
Increase unit 1102 is configured to increase the volume of the target object in an outwardly convex manner within the preset size range of the touch location of the user in the surface region of the target object according to the touch size.
Wherein, increase unit 1102 can increase the volume of the target object in an outwardly convex manner within the preset size range of the touch location of the user in the surface region of the target object according to the touch size of the user at the touch location. The preset size may be a value range; for example, the contact area of the user has an upper limit, and within that upper limit the preset size range can be directly proportional to the touch size of the user at the touch location.
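One concrete way to produce an outwardly convex change is to push mesh vertices near the touch point along their normals by an amount that grows with the touch size. The sketch below assumes a vertex/normal mesh representation and uses illustrative constants; with a negative sign the same rule gives the inwardly concave volume decrease described later.

```python
# A minimal mesh-displacement sketch of the outwardly convex volume increase.
import numpy as np

def bulge_surface(vertices, normals, touch_point, touch_area_px,
                  scale=0.02, max_amount=0.2, sign=+1):
    """Push vertices near the touch point outward along their normals; the
    amount grows with the touch size up to an upper limit. With sign=-1 the
    same rule caves the surface inward for the volume decrease case."""
    amount = sign * min(scale * touch_area_px ** 0.5, max_amount)
    radius = 0.1 * max(1.0, touch_area_px ** 0.5)        # influence radius
    dist = np.linalg.norm(vertices - touch_point, axis=1)
    falloff = np.clip(1.0 - dist / radius, 0.0, 1.0)     # 1 at the touch, 0 at edge
    return vertices + normals * (amount * falloff)[:, None]
```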
As a possible embodiment, suppose second determining unit 1101 determines the touch size of the user at the touch location and increase unit 1102 increases the volume of the target object in an outwardly convex manner within the preset size range of the touch location of the user in the surface region of the target object according to the touch size, so that the changed region becomes a first volume increase region; afterwards, the mobile terminal detects the touch location of the user again. Several cases need specific description: 1. If the mobile terminal detects again that the touch location of the user is within the surface region of the target object in the 3D image corresponding to the preset picture and outside the first volume increase region, the volume of the target object is increased at the touch location of the user in the surface region of the target object according to the touch size of the user, forming a second volume increase region. 2. If the mobile terminal detects again that the touch location of the user is within the surface region of the target object in the 3D image corresponding to the preset picture and at the inner edge of the first volume increase region, the volume of the target object is increased at the touch location of the user in the surface region of the target object according to the touch size of the user, forming a second volume increase region, where the part of the second volume increase region lying inside the original first volume increase region covers the corresponding part of the first volume increase region. In addition, the mobile terminal can determine the touch time span of the user at the touch location and increase the volume of the target object in an outwardly convex manner within the preset size range of the touch location of the user in the surface region of the target object according to the touch time span; the concrete method is the same as the above method and is not repeated here.
In the embodiment of the present invention, second determining unit 1101 determines the touch size of the user at the touch location; increase unit 1102 increases the volume of the target object in an outwardly convex manner within the preset size range of the touch location of the user in the surface region of the target object according to the touch size. Adopting the embodiment of the present invention, the volume of the target object can be adjusted according to the touch size of the user or the touch time span of the user.
Refer to Figure 12. Figure 12 is a structural schematic diagram of another embodiment of a mobile terminal provided by the embodiment of the present invention. The mobile terminal described in the present embodiment is a concrete refinement of adjustment unit 811 in the mobile terminal described with reference to Fig. 8d, and comprises a third determining unit 1201 and a reduce unit 1202, specifically as follows:
Third determining unit 1201 is configured to determine the touch time span of the user at the touch location.
Wherein, third determining unit 1201 determines the touch time span of the user at the touch location; that is, third determining unit 1201 detects the touch of the user at the touch location through the touch sensor of the mobile terminal and records the touch time span of the user at the touch location through the clock of the mobile terminal.
Reduce unit 1202 is configured to reduce the volume of the target object in an inwardly concave manner within the preset size range of the touch location of the user in the surface region of the target object according to the touch time span.
Wherein, the mobile terminal reduces the volume of the target object in an inwardly concave manner within the preset size range of the touch location of the user in the surface region of the target object according to the touch time span of the user at the touch location. The preset size may be a value range; for example, the contact area of the user has an upper limit, and within that upper limit the preset size range can be directly proportional to the touch time span of the user at the touch location. The specific implementation process is similar to the method described with reference to Fig. 6.
As a possible embodiment, suppose the volume increase tool is selected on the mobile terminal, the touch size of the user at the touch location is determined, and the volume of the target object is increased in an outwardly convex manner within the preset size range of the touch location of the user in the surface region of the target object according to the touch size, so that the changed region becomes a first volume increase region; afterwards, the volume decrease tool is selected on the mobile terminal, and the mobile terminal detects the touch location of the user again. Several cases need specific description: 1. If the mobile terminal detects again that the touch location of the user is within the surface region of the target object in the 3D image corresponding to the preset picture and outside the first volume increase region, the volume of the target object is reduced at the touch location of the user in the surface region of the target object according to the touch size of the user, forming a second volume decrease region. 2. If the mobile terminal detects again that the touch location of the user is within the surface region of the target object in the 3D image corresponding to the preset picture and at the inner edge of the first volume increase region, the volume of the target object is reduced at the touch location of the user in the surface region of the target object according to the touch size of the user, forming a second volume decrease region, where the part of the second volume decrease region lying inside the original first volume increase region covers the corresponding part of the first volume increase region. In addition, the mobile terminal can determine the touch time span of the user at the touch location and reduce the volume of the target object in an inwardly concave manner within the preset size range of the touch location of the user in the surface region of the target object according to the touch time span; the concrete method is the same as the above method and is not repeated here.
In the embodiment of the present invention, third determining unit 1201 determines the touch time span of the user at the touch location; reduce unit 1202 reduces the volume of the target object in an inwardly concave manner within the preset size range of the touch location of the user in the surface region of the target object according to the touch time span. Adopting the embodiment of the present invention, the volume of the target object can be adjusted according to the touch size of the user or the touch time span of the user.
Refer to Figure 13. Figure 13 is a structural schematic diagram of a further embodiment of a mobile terminal provided by the embodiment of the present invention. The mobile terminal described in the present embodiment comprises: at least one input device 1000; at least one output device 2000; at least one processor 3000, such as a CPU; and a memory 4000. The input device 1000, output device 2000, processor 3000 and memory 4000 are connected by a bus 5000.
Wherein, the input device 1000 can be a touch panel, a common PC, a liquid crystal display, a touch screen, a push button, etc.
The memory 4000 can be a high-speed RAM memory, or a non-volatile memory, such as a magnetic disk memory. The memory 4000 is used for storing a set of program codes, and the input device 1000, output device 2000 and processor 3000 are used for calling the program codes stored in the memory 4000 to perform the following operations:
The processor 3000 is configured to identify the target image in the viewfinder picture captured by the camera of the mobile terminal;
The processor 3000 is also configured to:
judge whether the similarity between the target image in the viewfinder picture captured by the camera and the preset picture is greater than or equal to the predetermined threshold value;
The processor 3000 is also configured to:
if the similarity between the target image in the viewfinder picture captured by the camera and the preset picture is greater than or equal to the predetermined threshold value, generate the 3D image corresponding to the preset picture on the display interface of the mobile terminal;
The input device 1000 is configured to obtain the first color instruction selected by the user;
The processor 3000 is also configured to:
detect the touch location of the user;
The processor 3000 is also configured to:
if it is detected that the touch location of the user is within the surface region of the target object in the 3D image corresponding to the preset picture, generate the first color block corresponding to the first color instruction within the preset size range of the touch location of the user in the surface region of the target object according to the first predetermined manner;
The processor 3000 is also configured to:
display the 3D image comprising the first color block on the display interface of the mobile terminal.
In some feasible embodiments, the processor 3000 is also configured to:
extract the characteristic information of the target image in the viewfinder picture captured by the camera and the characteristic information of the preset picture;
compare whether the ratio of the characteristic information of the target image in the viewfinder picture captured by the camera to the characteristic information of the preset picture is greater than or equal to the predetermined threshold value.
In some feasible embodiments, the processor 3000 is also configured to:
determine the touch size of the user at the touch location;
generate the first color block corresponding to the first color instruction within the preset size range of the touch location of the user in the surface region of the target object according to the touch size.
In some feasible embodiments, the processor 3000 is also configured to, after the color block identical in color to the color type selected by the user is displayed at the touch location of the user in the surface region of the target object according to the predetermined manner:
obtain the elimination instruction selected by the user;
detect the touch location of the user;
if it is detected that the touch location of the user is within the regional range of the first color block, eliminate the first color at the touch location of the user in the first color block according to the second predetermined manner.
In some feasible embodiments, the processor 3000 is also configured to, after the 3D image comprising the first color block is displayed on the display interface of the mobile terminal:
when the restore instruction input by the user is obtained, restore the adjusted 3D image to the 3D image corresponding to the preset picture.
In some feasible embodiments, the processor 3000 is also configured to:
obtain the 3D image volume change instruction selected by the user, where the 3D image volume change instruction is a volume increase instruction or a volume decrease instruction;
detect the touch location of the user;
if it is detected that the touch location of the user is within the surface region of the target object in the 3D image corresponding to the preset picture, adjust the volume of the target object within the preset size range of the touch location of the user in the surface region of the target object according to the third predetermined manner;
display the adjusted 3D image on the display interface of the mobile terminal.
In some feasible embodiments, the processor 3000 is also configured to:
determine the touch size of the user at the touch location;
increase the volume of the target object in an outwardly convex manner within the preset size range of the touch location of the user in the surface region of the target object according to the touch size.
In some feasible embodiments, the processor 3000 is also configured to:
determine the touch time span of the user at the touch location;
reduce the volume of the target object in an inwardly concave manner within the preset size range of the touch location of the user in the surface region of the target object according to the touch time span.
In a specific implementation, the input device 1000, output device 2000 and processor 3000 described in the embodiment of the present invention can execute the implementations described in the embodiments of the method for augmented reality drawing described with reference to Fig. 1 ~ Fig. 4 and provided by the embodiment of the present invention, and can also execute the implementations of the mobile terminal described in the first embodiment ~ the third embodiment of the mobile terminal described with reference to Fig. 5 ~ Fig. 7 and provided by the embodiment of the present invention, which are not repeated here.
The modules or sub-modules in all the embodiments of the present invention can be realized by a general-purpose integrated circuit, such as a CPU (Central Processing Unit), or by an ASIC (Application Specific Integrated Circuit).
The steps in the methods of the embodiments of the present invention can be reordered, merged and deleted according to actual needs.
The units in the devices of the embodiments of the present invention can be merged, divided and deleted according to actual needs.
One of ordinary skill in the art will appreciate that all or part of the flows in the methods of the above embodiments can be completed by a computer program instructing the relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, can include the flows of the embodiments of the above methods. The storage medium can be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), etc.
The above disclosure is only a preferred embodiment of the embodiments of the present invention and certainly cannot be used to limit the scope of rights of the embodiments of the present invention; therefore, equivalent variations made according to the claims of the embodiments of the present invention still fall within the scope covered by the embodiments of the present invention.

Claims (10)

1. A method for augmented reality drawing, characterized by comprising:
identifying the target image in the viewfinder picture captured by the camera of a mobile terminal;
judging whether the similarity between the target image in the viewfinder picture captured by the camera and a preset picture is greater than or equal to a predetermined threshold value;
if the similarity between the target image in the viewfinder picture captured by the camera and the preset picture is greater than or equal to the predetermined threshold value, generating a 3D image corresponding to the preset picture on the display interface of the mobile terminal;
obtaining a first color instruction selected by the user;
detecting the touch location of the user;
if it is detected that the touch location of the user is within the surface region of the target object in the 3D image corresponding to the preset picture, generating a first color block corresponding to the first color instruction within the preset size range of the touch location of the user in the surface region of the target object according to a first predetermined manner;
displaying, on the display interface of the mobile terminal, the 3D image comprising the first color block.
2. The method of claim 1, characterized in that generating the first color block corresponding to the first color instruction within the preset size range of the touch location of the user in the surface region of the target object according to the first predetermined manner comprises:
determining the touch size of the user at the touch location;
generating the first color block corresponding to the first color instruction within the preset size range of the touch location of the user in the surface region of the target object according to the touch size.
3. The method of claim 1, characterized in that, after generating the 3D image corresponding to the preset picture on the display interface of the mobile terminal, the method further comprises:
obtaining a 3D image volume change instruction selected by the user, the 3D image volume change instruction being a volume increase instruction or a volume decrease instruction;
detecting the touch location of the user;
if it is detected that the touch location of the user is within the surface region of the target object in the 3D image corresponding to the preset picture, adjusting the volume of the target object within the preset size range of the touch location of the user in the surface region of the target object according to a third predetermined manner;
displaying the adjusted 3D image on the display interface of the mobile terminal.
4. The method of claim 3, characterized in that, when the 3D image volume change instruction selected by the user is the volume increase instruction, adjusting the volume of the target object within the preset size range of the touch location of the user in the surface region of the target object according to the predetermined manner comprises:
determining the touch size of the user at the touch location;
increasing the volume of the target object in an outwardly convex manner within the preset size range of the touch location of the user in the surface region of the target object according to the touch size.
5. The method of claim 3, characterized in that, when the 3D image volume change instruction selected by the user is the volume decrease instruction, adjusting the volume of the target object within the preset size range of the touch location of the user in the surface region of the target object according to the predetermined manner comprises:
determining the touch time span of the user at the touch location;
reducing the volume of the target object in an inwardly concave manner within the preset size range of the touch location of the user in the surface region of the target object according to the touch time span.
6. A mobile terminal, characterized by comprising:
a recognition unit, configured to identify the target image in the viewfinder picture captured by the camera of the mobile terminal;
a judging unit, configured to judge whether the similarity between the target image in the viewfinder picture captured by the camera and a preset picture is greater than or equal to a predetermined threshold value;
a display unit, configured to generate, on the display interface of the mobile terminal, a 3D image corresponding to the preset picture if the similarity between the target image in the viewfinder picture captured by the camera and the preset picture is greater than or equal to the predetermined threshold value;
a first acquiring unit, configured to obtain a first color instruction selected by the user;
a detecting unit, configured to detect the touch location of the user;
a generation unit, configured to generate, if it is detected that the touch location of the user is within the surface region of the target object in the 3D image corresponding to the preset picture, a first color block corresponding to the first color instruction within the preset size range of the touch location of the user in the surface region of the target object according to a first predetermined manner;
the display unit being further configured to display, on the display interface of the mobile terminal, the 3D image comprising the first color block.
7. The mobile terminal of claim 6, characterized in that the generation unit further comprises:
a first determining unit, configured to determine the touch size of the user at the touch location;
a generation subelement, configured to generate the first color block corresponding to the first color instruction within the preset size range of the touch location of the user in the surface region of the target object according to the touch size.
8. The mobile terminal of claim 6, characterized in that the mobile terminal further comprises:
a third acquiring unit, configured to obtain a 3D image volume change instruction selected by the user, the 3D image volume change instruction being a volume increase instruction or a volume decrease instruction;
an adjustment unit, configured to adjust, if the detecting unit detects that the touch location of the user is within the surface region of the target object in the 3D image corresponding to the preset picture, the volume of the target object within the preset size range of the touch location of the user in the surface region of the target object according to a third predetermined manner.
9. The mobile terminal of claim 8, characterized in that the adjustment unit comprises:
a second determining unit, configured to determine the touch size of the user at the touch location;
an increase unit, configured to increase the volume of the target object in an outwardly convex manner within the preset size range of the touch location of the user in the surface region of the target object according to the touch size.
10. The mobile terminal of claim 8, characterized in that the mobile terminal further comprises:
a third determining unit, configured to determine the touch time span of the user at the touch location;
a reduce unit, configured to reduce the volume of the target object in an inwardly concave manner within the preset size range of the touch location of the user in the surface region of the target object according to the touch time span.
CN201510121235.0A 2015-03-19 2015-03-19 Method for augmented reality drawing and mobile terminal Active CN104766354B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510121235.0A CN104766354B (en) 2015-03-19 2015-03-19 Method for augmented reality drawing and mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510121235.0A CN104766354B (en) 2015-03-19 2015-03-19 Method for augmented reality drawing and mobile terminal

Publications (2)

Publication Number Publication Date
CN104766354A true CN104766354A (en) 2015-07-08
CN104766354B CN104766354B (en) 2018-05-04

Family

ID=53648159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510121235.0A Active CN104766354B (en) 2015-03-19 2015-03-19 The method and mobile terminal that a kind of augmented reality is drawn

Country Status (1)

Country Link
CN (1) CN104766354B (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103038771A (en) * 2010-02-22 2013-04-10 耐克国际有限公司 Augmented reality design system
CN103391411A (en) * 2012-05-08 2013-11-13 索尼公司 Image processing apparatus, projection control method and program
US20140002472A1 (en) * 2012-06-29 2014-01-02 Disney Enterprises, Inc. Augmented reality surface painting
US20140104274A1 (en) * 2012-10-17 2014-04-17 Microsoft Corporation Grasping virtual objects in augmented reality
CN103970268A (en) * 2013-02-01 2014-08-06 索尼公司 Information processing device, client device, information processing method, and program
CN103606310A (en) * 2013-10-29 2014-02-26 中兴通讯股份有限公司 Teaching method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Adrian Clark, Andreas Dünser, Raphael Grasset: "An Interactive Augmented Reality Coloring Book", IEEE International Symposium on Mixed & Augmented Reality *
Antoine Petit, Eric Marchand, Keyvan Kanani: "Augmenting Markerless Complex 3D Objects By Combining Geometrical and Color Edge Information", IEEE International Symposium on Mixed and Augmented Reality *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106066692A (en) * 2016-05-31 2016-11-02 北京木马智慧教育科技股份有限公司 A kind of interacting toys based on AR technology and construction method
CN106127859A (en) * 2016-06-28 2016-11-16 华中师范大学 A kind of mobile augmented reality type scribble is painted this sense of reality and is generated method
CN106127859B (en) * 2016-06-28 2018-08-24 华中师范大学 A kind of mobile augmented reality type scribble paints the sense of reality generation method of sheet
CN107767466A (en) * 2017-11-08 2018-03-06 霖晏全挚(上海)医疗科技有限公司 AR is imaged adjusting method
CN107909655A (en) * 2017-11-08 2018-04-13 刘洋 AR is imaged regulating system
CN107767466B (en) * 2017-11-08 2021-04-23 刘洋 AR imaging adjusting method
CN107909655B (en) * 2017-11-08 2021-04-23 刘洋 AR imaging adjustment system
CN112041803A (en) * 2018-04-30 2020-12-04 三星电子株式会社 Electronic device and operation method thereof
CN110086982A (en) * 2019-03-12 2019-08-02 浙江艺旗教育科技有限公司 A kind of perspective image antidote and device

Also Published As

Publication number Publication date
CN104766354B (en) 2018-05-04

Similar Documents

Publication Publication Date Title
US11410277B2 (en) Method and device for blurring image background, storage medium and electronic apparatus
CN104766354A (en) Method for augmented reality drawing and mobile terminal
CN107613202B (en) Shooting method and mobile terminal
CN107977659B (en) Character recognition method and device and electronic equipment
US9405182B2 (en) Image processing device and image processing method
CN108664364B (en) Terminal testing method and device
KR101645994B1 (en) Detecting apparatus for charater recognition region and charater recognition method
CN106713696B (en) Image processing method and device
CN105046254A (en) Character recognition method and apparatus
CN110506252B (en) Terminal screen is fixed a position to transform relation based on mark figure point coordinate in pattern
WO2022077827A1 (en) Graph drawing method and apparatus, storage medium, and terminal device
US10269136B2 (en) System and method for processing depth images which capture an interaction of an object relative to an interaction plane
CN111914604A (en) Augmented reality display method for applying hair color to eyebrow
CN112396050B (en) Image processing method, device and storage medium
CN105389541A (en) Fingerprint image recognition method and device
CN111582257A (en) Method, device and system for detecting object to be detected
CN105654531B (en) Method and device for drawing image contour
CN112418214A (en) Vehicle identification code identification method and device, electronic equipment and storage medium
CN110431563A (en) The method and apparatus of image rectification
CN105824398A (en) Incoming call processing method and mobile terminal
CN110738185B (en) Form object identification method, form object identification device and storage medium
CN114627561A (en) Dynamic gesture recognition method and device, readable storage medium and electronic equipment
CN110086987B (en) Camera visual angle cutting method and device and storage medium
EP3435281B1 (en) Skin undertone determining method and an electronic device
US9704030B2 (en) Flesh color detection condition determining apparatus, and flesh color detection condition determining method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant