CN106095088B - Electronic device and image processing method therefor - Google Patents
Electronic device and image processing method therefor Download PDF Info
- Publication number
- CN106095088B CN106095088B CN201610395033.XA CN201610395033A CN106095088B CN 106095088 B CN106095088 B CN 106095088B CN 201610395033 A CN201610395033 A CN 201610395033A CN 106095088 B CN106095088 B CN 106095088B
- Authority
- CN
- China
- Prior art keywords
- image
- display device
- acquisition
- input
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
Abstract
An embodiment of the invention discloses an electronic device, comprising: an image acquisition device for capturing an image of an image acquisition region; a display device for outputting images; a reflection unit for presenting a virtual image in a reflecting region, the reflecting region and the image acquisition region having an overlapping portion; and a processing unit for, when a preset condition is met, processing a local portion of the image captured by the image acquisition device to obtain a target image of that portion and controlling the display device to display the target image. The local portion corresponds to a target body located in the overlapping portion, and the size of the target image on the display device is greater than the size of the virtual image formed by the target body. An embodiment of the invention also discloses a corresponding image processing method.
Description
Technical field
The present invention relates to information processing technology, and in particular to an electronic device and an image processing method therefor.
Background art
When looking in a mirror, a user often needs to see the details of a specific location, such as part of the face, and therefore usually moves closer to the mirror to see more clearly. However, because the virtual image formed by a plane mirror is the same size as the real object, the reflected location appears only as large as the real location no matter how close the user is to the mirror. The user cannot clearly see the details of that location, which greatly degrades the user experience.
Summary of the invention
In view of this, embodiments of the present invention provide an electronic device and an image processing method therefor that can, at least when a trigger condition is met, display an image larger than the virtual image formed by a target body in the reflecting region, improving the user experience.
To this end, the technical solution of the present invention is realized as follows:
An embodiment of the invention provides an electronic device, comprising:
an image acquisition device for capturing an image of an image acquisition region;
a display device for outputting images;
a reflection unit for presenting a virtual image in a reflecting region, the reflecting region and the image acquisition region having an overlapping portion; and
a processing unit for, when a preset condition is met, processing a local portion of the image captured by the image acquisition device to obtain a target image of that portion, and controlling the display device to display the target image;
wherein the local portion corresponds to a target body located in the overlapping portion, and the size of the target image on the display device is greater than the size of the virtual image formed by the target body.
An embodiment of the invention also provides an image processing method, comprising:
capturing an image of an image acquisition region;
when a preset condition is met, processing a local portion of the image to obtain a target image; and
controlling a display device to display the target image;
wherein the local portion corresponds to a target body located in an overlapping portion, and the size of the target image on the display device is greater than the size of the virtual image formed by the target body.
With the technical solution of the present invention, when a trigger condition is met, an image larger than the virtual image formed by the target body in the reflecting region can be displayed, improving the user experience.
Brief description of the drawings
Fig. 1 is a first schematic diagram of the composition of an electronic device provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of a reflecting region and an image acquisition region having an overlapping portion, provided by an embodiment of the present invention;
Fig. 3 is a first schematic diagram of the positional relationship among the image acquisition device, the display device, and the reflection unit provided by an embodiment of the present invention;
Fig. 4 is a second schematic diagram of the positional relationship among the image acquisition device, the display device, and the reflection unit provided by an embodiment of the present invention;
Fig. 5 is a second schematic diagram of the composition of an electronic device provided by an embodiment of the present invention;
Fig. 6 is a first schematic diagram of determining a local portion according to a user's input, provided by an embodiment of the present invention;
Fig. 7 is a second schematic diagram of determining a local portion according to a user's input, provided by an embodiment of the present invention;
Fig. 8 is a schematic diagram of displaying the target image at a position corresponding to the virtual image, provided by an embodiment of the present invention;
Fig. 9 is a schematic diagram of a user selecting a magnification threshold through a mirror image, provided by an embodiment of the present invention;
Fig. 10 is another schematic diagram of determining a magnification threshold selected by the user, provided by an embodiment of the present invention;
Fig. 11 is a third schematic diagram of the composition of an electronic device provided by an embodiment of the present invention;
Fig. 12 is a schematic flowchart of an image processing method provided by an embodiment of the present invention.
Specific embodiments
The technical solution of the present invention is elaborated further below with reference to the drawings and specific embodiments.
Embodiment one
Fig. 1 is a first schematic diagram of the composition of an electronic device provided by an embodiment of the present invention. As shown in Fig. 1, the electronic device comprises:
an image acquisition device 11 for capturing an image of an image acquisition region;
a display device 12 for outputting images;
a reflection unit 13 for presenting a virtual image in a reflecting region, the reflecting region and the image acquisition region having an overlapping portion; and
a processing unit 14 for, when a preset condition is met, processing a local portion of the image captured by the image acquisition device 11 to obtain a target image of that portion, and controlling the display device 12 to display the target image;
wherein the local portion corresponds to a target body located in the overlapping portion, and the size of the target image in the display device 12 is greater than the size of the virtual image formed by the target body in the reflection unit 13.
Here, the target body refers to a part of a first object. The first object may be a person, an animal, an instrument, or an object without vital signs (such as a pen or a cup), and so on.
Because the size of the target image of the local portion shown on the display device 12 is greater than the size of the virtual image formed by the target body, an image larger than the virtual image corresponding to the target body in the reflecting region can be displayed. The user thus sees an image larger than the corresponding virtual image of the target body, which makes it easier to observe the target body, or even to perform some processing operation based on the observation, improving the user experience.
Optionally, making the size of the target image in the display device 12 greater than the size of the virtual image formed by the target body in the reflection unit 13 can be realized in hardware or in software.
In an optional hardware implementation, an image of size M × N is obtained by the image acquisition device 11. The processing unit 14 extracts a local portion from the image and determines it as the target image; the local portion contains the target body, and the size of the target image is m × n, where m < M and n < N. The target image of m × n pixels is shown by the display device 12 such that the displayed size of the m × n pixels is greater than the corresponding size of the virtual image formed by the target body, i.e., greater than the display pixels that coincide with the virtual image. For example, the display pixel pitch of the display device 12 itself is relatively large, or the processing unit 14 instructs the display device 12 to display the target image on spaced-apart pixels.
For example, the image acquisition device 11 obtains a facial image of size 100 × 100. The processing unit 14 crops the image corresponding to the nose from the facial image; assume the size of the cropped nose image is 20 × 20. The display device 12 then displays this 20 × 20 target image. If the display pixel pitch of the display device 12 is relatively large, the 20 × 20 pixels occupy a relatively large screen area, so the displayed target image is larger than the corresponding virtual image. Alternatively, when displaying the 20 × 20 pixels, spaced-apart pixels on the screen are selected to display the 20 × 20 pixel target image, so that the displayed target image is again larger than the corresponding virtual image.
In an optional software implementation, the image acquisition device 11 obtains the image of the image acquisition region; the processing unit 14 selects a local portion of the image captured by the image acquisition device 11, applies magnification processing to that portion to obtain the target image of the portion, and controls the display device 12 to display the target image. In this way, the size of the target image of the local portion shown on the display device 12 is greater than the size of the virtual image formed by the target body corresponding to that portion, and the user sees an image larger than the corresponding virtual image of the target body.
Fig. 2 shows a schematic diagram in which the reflecting region and the image acquisition region have an overlapping portion. As shown in Fig. 2, the range covered by the reflecting region may differ from the range covered by the image acquisition region, but the two may have a certain overlapping portion. When the target body is located in this overlapping portion, the image acquisition device can capture an image of the target body, and the reflection unit also presents the virtual image of the target body.
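The overlapping portion can be modeled as the intersection of two axis-aligned rectangles, with the target body capturable and reflected only when it lies inside that intersection. A hedged sketch (the coordinates are illustrative values, not taken from the patent):

```python
def overlap(a, b):
    """Intersection of two axis-aligned rectangles (x, y, w, h); None if disjoint."""
    x = max(a[0], b[0])
    y = max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2])
    y2 = min(a[1] + a[3], b[1] + b[3])
    if x2 <= x or y2 <= y:
        return None
    return (x, y, x2 - x, y2 - y)

def in_region(point, rect):
    """True when a point (e.g., the target body's position) lies in a region."""
    px, py = point
    x, y, w, h = rect
    return x <= px < x + w and y <= py < y + h

reflecting = (0, 0, 80, 60)                # reflecting region (assumed extent)
acquiring = (20, 10, 100, 70)              # image acquisition region (assumed extent)
shared = overlap(reflecting, acquiring)    # the overlapping portion
```

A target body at a point where `in_region(point, shared)` holds is both imaged by the camera and mirrored by the reflection unit.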
The reflection unit 13 overlaps the image acquisition device 11: the reflection unit 13 is located in the image acquisition region of the image acquisition device 11 but does not affect the acquisition by the image acquisition device 11. The reflection unit 13 also overlaps the display device 12: the reflection unit 13 is located in the sensing region of the display device 12 but does not prevent the display of the display device 12 from being perceived.
Here, overlapping can be understood as being stacked in layers, covering one another; or as one object occupying the same position as another object and coexisting with it.
The above overlapping can also be understood as the reflection unit 13 being located in front of the image acquisition device 11, and the reflection unit 13 being located in front of the display device 12.
Here, not preventing the display of the display device 12 from being perceived means that, when the display device 12 is displaying, the displayed information is not blocked by the presence of the reflection unit 13; that is, viewing is not obstructed.
Preferably, the reflection unit 13 is made of a semi-transparent, semi-reflective material, which does not prevent the display of the display device 12 from being perceived.
The reflection unit 13 and the display device 12 may or may not be in contact; there is no electrical connection between them.
In a specific embodiment, the image acquisition device 11 and the display device 12 are located in the same plane, and the reflection unit 13 overlaps both the image acquisition device 11 and the display device 12. The reflection unit 13 is located in the image acquisition region of the image acquisition device 11 but does not affect the acquisition by the image acquisition device 11; the reflection unit 13 is located in the sensing region of the display device 12 but does not prevent the display of the display device 12 from being perceived.
Fig. 3 shows a first schematic diagram of the positional relationship among the image acquisition device 11, the display device 12, and the reflection unit 13. As shown in Fig. 3, the image acquisition device 11 is located at the top middle of the display device 12, and the two are in the same plane. For example, when the electronic device takes the form of a television or an all-in-one machine, the screen needs to remain intact, so the upper portion is a bezel with the camera in the middle of the bezel, and the lower portion is the screen. The reflection unit 13 is located on the outside of the image acquisition device 11 and the display device 12, facing the user and closest to the user. The reflection unit 13 is located in the image acquisition region of the image acquisition device 11 but does not affect the acquisition by the image acquisition device 11; the reflection unit 13 is located in the sensing region of the display device 12 but does not prevent the display of the display device 12 from being perceived.
Fig. 4 shows a second schematic diagram of the positional relationship among the image acquisition device 11, the display device 12, and the reflection unit 13. It should be noted that the three parts in the figure are only illustrative and do not represent actual sizes. As shown in Fig. 4, the order from back to front is: image acquisition device 11, display device 12, reflection unit 13. The image acquisition device 11 overlaps the display device 12, and the display device 12 is located in the image acquisition region of the image acquisition device 11 but does not affect the acquisition by the image acquisition device 11; the reflection unit 13 is likewise located in the image acquisition region of the image acquisition device 11 without affecting the acquisition. The reflection unit 13 overlaps the display device 12; the reflection unit 13 is located in the sensing region of the display device 12 but does not prevent the display of the display device 12 from being perceived.
Optionally, the display device 12 is a display device with a certain light transmittance, such as an organic light-emitting diode (OLED, Organic Light-Emitting Diode) display. Alternatively, the display device 12 is a display with switchable transmission states, such as a display that switches between a display state and a transparent state.
When the preset condition is met, the electronic device described in this embodiment processes a local portion of the captured image, obtains the target image of that portion, and displays the target image. In this way, when the trigger condition is met, an image larger than the virtual image formed by the target body in the reflecting region can be displayed, allowing the user to see an image larger than the corresponding virtual image of the target body.
Embodiment two
Fig. 5 is a second schematic diagram of the composition of an electronic device provided by an embodiment of the present invention. As shown in Fig. 5, the electronic device comprises:
an image acquisition device 11 for capturing an image of an image acquisition region;
a display device 12 for outputting images;
a reflection unit 13 for presenting a virtual image in a reflecting region, the reflecting region and the image acquisition region having an overlapping portion;
an input acquisition device 15 for capturing a user's input; and
a processing unit 14 for judging whether the input meets a preset condition and, when the preset condition is met, processing a local portion of the image captured by the image acquisition device 11 to obtain a target image of that portion and controlling the display device 12 to display the target image;
wherein the target body is located in the overlapping portion; the image acquisition device 11 captures an image of the target body; and the size of the target image in the display device 12 is greater than the size of the virtual image formed in the reflection unit 13 by the part of the target body corresponding to the local portion.
The preset condition refers to the condition for triggering the local enlargement display function. For example, the preset condition may be that the user has performed a predefined gesture, such as touching the face with a hand. For another example, the preset condition may be that the user has spoken a predefined voice command.
Optionally, the input acquisition device 15 and the image acquisition device 11 may be the same device.
For example, the input acquisition device 15 and the image acquisition device 11 may be the same camera, e.g., a depth camera that can detect input carrying depth information; that is, the same camera can both capture the image of the image acquisition region and capture the user's input.
Optionally, the input acquisition device 15 and the image acquisition device 11 are different devices.
Here, "different" means the two are not the same device. Illustratively, they may differ in type; for example, the image acquisition device 11 is a camera while the input acquisition device 15 is a microphone. Illustratively, they may be different devices of the same kind; for example, the image acquisition device 11 is camera A and the input acquisition device 15 is camera B. Illustratively, they may differ in number; for example, the image acquisition device 11 consists of cameras A, B, and C, while the input acquisition device 15 is camera D.
Optionally, the processing unit 14 is further configured to:
determine a datum point according to the user's input collected by the input acquisition device 15; and
determine the local portion based on the datum point.
As one implementation, the processing unit 14 is further configured to:
determine a datum point in the image of the image acquisition region according to the input; and
determine, based on the datum point, a local portion that contains the datum point.
As another implementation, the processing unit 14 is further configured to:
determine a manipulation region corresponding to the input in the image of the image acquisition region; and
determine the local portion corresponding to the manipulation region.
It should be noted that triggering the local enlargement display function and determining the datum point can be completed by a single detection. Illustratively, the trigger condition for the local enlargement display function is that the user touches some part of the body with a finger; for example, the user touches the cheek with a finger. When the input acquisition device detects this operation, it notifies the processing unit to determine the datum point, and the processing unit can determine the datum point based on the operation; that is, the position touched by the user is determined as the datum point.
Of course, triggering the local enlargement display function and determining the datum point can also be two separate detections. Illustratively, the trigger condition for the local enlargement display function is that the user makes a specific gesture, such as a scissors-hand gesture. For example, the user makes a scissors-hand gesture in front of the electronic device; when the input acquisition device detects this gesture, detection is started. The input acquisition device then detects that the user touches the cheek with a finger and notifies the processing unit to determine the datum point; the processing unit determines the datum point based on this operation, that is, the position touched by the user is determined as the datum point.
In a specific embodiment, the processing unit 14 determines a datum point based on the user's input collected by the input acquisition device 15 and determines the position coordinates of the datum point, where the coordinates are coordinates in the acquisition coordinate system. It then determines a range within a preset distance centered on the position coordinates and notifies the image acquisition device 11 to obtain the image within this range. The image within this range is the local portion.
Fig. 6 shows a first schematic diagram of determining a local portion according to the user's input. As shown in Fig. 6, the input acquisition device 15 detects that the user has touched the nose with a finger, and the processing unit 14 obtains the position coordinates of the finger touch position from this operation. Then, centered on these position coordinates, it determines an acquisition range within a preset extent and notifies the image acquisition device 11 to capture the image of that acquisition range. In this way, the user can tell the electronic device which position to magnify with a simple gesture.
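Determining the range "within a preset distance centered on the position coordinates" amounts to placing a window around the datum point and clamping it so it stays inside the captured image. One possible sketch; the window half-size and the coordinates are assumptions for illustration:

```python
def window_around(datum, half, img_w, img_h):
    """Axis-aligned window of (2*half) x (2*half) centered on the datum point,
    shifted as needed so it remains inside the img_w x img_h captured image.
    Returns (left, top, width, height)."""
    x, y = datum
    left = min(max(x - half, 0), img_w - 2 * half)
    top = min(max(y - half, 0), img_h - 2 * half)
    return (left, top, 2 * half, 2 * half)

# Datum point from a detected finger touch on the nose (illustrative values).
region = window_around((50, 50), 10, 100, 100)   # centered window
edge = window_around((2, 98), 10, 100, 100)      # near a corner: clamped inside
```

Clamping keeps the local portion valid even when the datum point lies near the edge of the image acquisition region.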
For another example, the input acquisition device 15 detects a specific gesture (such as two fingers held still for 2 s); the processing unit 14 then takes the fingertip position as the datum point and determines the local portion based on that datum point. The image acquisition device 11 captures the image of that portion, and the processing unit 14 analyzes the image captured by the image acquisition device 11 to obtain the processed local image.
The technical solution described in this embodiment can be applied to the detection of a particular accessory. In this case, the display function of the electronic device can be triggered by a specific shape, color, or other identifier. For example, with a handheld humidity measuring instrument, if the electronic device determines that the datum point is the tip region of a skin humidity detector, the electronic device will display, at a certain position of the display device, the skin humidity detection result produced by the tip region of the skin humidity detector.
Of course, the particular accessory can also be another device or a part of a device. Correspondingly, the display device of the electronic device can show a display effect matched to that device. For example, for a physiological index detection device, the display device of the electronic device will show skin physiological parameters, such as cleanliness.
As one implementation, if, after the magnification region is determined, the datum point is detected to change, the magnification region follows the change of the datum point; here, the magnification region is the region determined based on the datum point.
For example, after the magnification region is determined, if the user's positioning datum changes, the magnification region follows the change. For instance, when the finger or the skin humidity detector moves, the datum point based on the touch position changes, and the magnification region determined from the datum point changes accordingly.
In one implementation, if, after the magnification region is determined, the datum point is detected to leave and the corresponding part of the target body in the magnification region moves, the corresponding part of the target body in the original magnification region is locked, and the magnification region follows the movement of that part of the target body.
For another example, the input acquisition device 15 detects that the user has touched the nose; the processing unit 14 obtains the position coordinates of the nose from this operation and notifies the image acquisition device 11 to focus on the nose based on these coordinates, obtaining an image of the nose. When the user is detected to move the nose, the image acquisition device 11 is notified to follow the position coordinates of the moved nose and keep focusing on it, obtaining a real-time image of the nose. In this way, after the local enlargement mode is triggered, the image of the pre-designated position can be acquired in real time.
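The follow behavior above can be sketched as recomputing the magnification window from the latest datum-point coordinates on every frame. This is a simplification: in the device, the per-frame positions would come from detection on each captured image, and all sizes here are assumed values:

```python
def track_region(datum_per_frame, half=10, img_w=100, img_h=100):
    """Recompute the magnification window for each detected datum-point
    position, clamped to the image bounds, so the window follows the part."""
    regions = []
    for x, y in datum_per_frame:
        left = min(max(x - half, 0), img_w - 2 * half)
        top = min(max(y - half, 0), img_h - 2 * half)
        regions.append((left, top, 2 * half, 2 * half))
    return regions

# Nose positions detected over three consecutive frames (illustrative).
path = [(50, 50), (55, 52), (60, 55)]
windows = track_region(path)   # one window per frame, following the nose
```

Each frame's window is independent of the previous one, so the magnified view stays centered on the moving part without accumulating drift.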
In another implementation, if, after the magnification region is determined, the datum point is detected to leave and the corresponding part of the target body in the magnification region moves, the magnification region is instead held fixed, and whatever part of the target body newly enters the magnification region is determined as the portion to be magnified.
In a specific embodiment, the processing unit 14 determines, based on the user's input trajectory collected by the input acquisition device 15, the position coordinates corresponding to the input trajectory, and notifies the image acquisition device 11 to obtain the image within that spatial position range; the image falling within that spatial position range is the local portion.
Fig. 7 shows a second schematic diagram of determining a local portion according to the user's input. The input acquisition device 15 detects that the user has drawn a circle in the air with a hand; the processing unit 14 obtains the position coordinate range of the circle from this operation and notifies the image acquisition device 11 to obtain the image within the position coordinate range enclosed by the circle. For example, if, when the user draws the circle, the circle covers the left half of the face, then when the user moves the face so that the right half of the face falls into the spatial position range of the circle, the right half of the face will be magnified. In this way, after the local enlargement mode is triggered, the image falling within the pre-designated position coordinate range is acquired in real time.
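One simple way to turn the circle gesture into a region is to take the bounding box of the sampled trajectory points; anything later detected inside that box is magnified. A hedged sketch with illustrative sample points:

```python
def trajectory_bounds(points):
    """Bounding box (x, y, w, h) of a sampled in-air trajectory."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))

def falls_inside(point, box):
    """True when a point lies within the pre-designated coordinate range."""
    x, y, w, h = box
    return x <= point[0] <= x + w and y <= point[1] <= y + h

# Sampled points of a circle drawn around the left half of the face (illustrative).
circle = [(10, 20), (40, 5), (70, 20), (40, 35)]
box = trajectory_bounds(circle)

# After the face moves, a point on the right cheek now falls inside the box,
# so it becomes the portion to be magnified.
right_cheek_now = (55, 22)
```

A tighter region (e.g., the convex hull of the points) could replace the bounding box; the box keeps the sketch minimal.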
With the electronic device described in this embodiment, the electronic device determines the range of the local portion according to the user's input. The user can therefore tell the electronic device which position to process with a simple input, and the local display function can be triggered naturally, making it simpler to examine a local region.
Embodiment three
An electronic device, comprising:
an image acquisition device 11 for capturing an image of an image acquisition region;
a display device 12 for outputting images;
a reflection unit 13 for presenting a virtual image in a reflecting region, the reflecting region and the image acquisition region having an overlapping portion; and
a processing unit 14 for, when a preset condition is met, processing a local portion of the image captured by the image acquisition device 11 to obtain a target image of that portion, and controlling the display device 12 to display the target image;
wherein the local portion corresponds to a target body located in the overlapping portion, and the size of the target image in the display device 12 is greater than the size of the virtual image formed by the target body in the reflection unit 13.
Optionally, the processing unit 14 is further configured to:
determine a display position; and
control the display device 12 to display the target image at the display position.
The target image may be in video format or in picture format.
In a specific embodiment, when the preset condition is met, the image acquisition device 11 continuously captures images, the processing unit 14 continuously processes the local portion of the images captured by the image acquisition device 11 to obtain the target images of that portion, and the display device 12 continuously displays the images processed by the processing unit 14.
In another specific embodiment, when the preset condition is met, the image acquisition device 11 captures a single frame, the processing unit 14 processes the local portion of the image captured by the image acquisition device 11 to obtain the target image of that portion, and the display device 12 displays that frame as processed by the processing unit 14.
Illustratively, the display position is a fixed position. For example, the fixed position can be the upper-left corner, lower-left corner, upper-right corner, or lower-right corner of the display device 12, and so on.
Certainly, the fixed position can be system default setting, or preparatory according to the use habit of oneself by user
Setting.
In this way, can on the fixation position of display device displaying target image.
In a specific embodiment, the control display device 12 shows the mesh at the display position 12
Logo image, comprising:
The display device 12 reads display position information, shows at display position corresponding with the display position information
Show the target image.
In a specific embodiment, the processing unit 14 knows fixed display position for a left side from system set-up information
Upper angle, then, target image is sent to display device 12 by the processing unit 14, while being sent for characterizing the aobvious of the upper left corner
Show location information, the display device 12 will show the target image based on the display position information.
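As a sketch of how such corner keywords might be resolved to pixel coordinates (the keyword names, margin, and function signature below are illustrative, not part of the patent):

```python
def corner_position(corner, screen_w, screen_h, img_w, img_h, margin=16):
    """Map a stored corner keyword to the top-left pixel at which the
    target image should be drawn on the display device."""
    x_right = screen_w - img_w - margin
    y_bottom = screen_h - img_h - margin
    positions = {
        "top_left": (margin, margin),
        "top_right": (x_right, margin),
        "bottom_left": (margin, y_bottom),
        "bottom_right": (x_right, y_bottom),
    }
    return positions[corner]
```

A caller would pass the screen resolution and the target image size; the margin simply keeps the image clear of the bezel.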
In the above scheme, preferably, the electronic device further comprises:
an input acquisition device 15 configured to acquire an input of the user;
correspondingly, the processing unit 14 is further configured to judge whether the input meets the preset condition, and, when it does, to process the local part of the image acquired by the image acquisition device 11, obtain the target image of the local part, and control the display device 12 to display the target image.
Optionally, the processing unit 14 is further configured to:
determine a first position of the virtual image formed in the reflection unit 13 by the target body part corresponding to the local part;
control the display device 12 to display the target image based on the first position.
Here, the first position refers to a position within the region of the virtual image formed in the reflection unit 13 by the target body part corresponding to the local part.
In a specific embodiment, the processing unit 14 is further configured to:
determine a first area of the virtual image formed in the reflection unit 13 by the target body part corresponding to the local part;
control the display device 12 to display the target image based on a second area that includes the first area.
In this way, the electronic device can display the target image at a position matched to the formed virtual image, strengthening the association between the target image and the virtual image. This favours adoption of the local magnification function and improves its appeal and visual effect for the user.
In a specific embodiment, the processing unit 14 determines the position coordinates of the virtual image formed by the local part in the reflection unit 13, and controls the display device 12 to display the target image based on those position coordinates.
It should be understood that, because the size of the target image is greater than the size of the virtual image formed by the local part in the reflection unit 13, displaying the target image at the position of that virtual image further strengthens the association between the target image and the virtual image and improves the user's experience.
For example, through the reflection unit (such as a light-transmitting mirror) the user can perceive his or her own virtual image in the reflection unit; in general, however, the size of the virtual image in the reflection unit matches the actual size of the user. Because the virtual image is not large enough, a given body part must be magnified to be seen clearly; displaying the target image at the position of the virtual image of that part reduces the eye movement needed to look elsewhere. Here, the light-transmitting mirror refers to a mirror with sufficient light transmittance that the display of the display device can still be perceived through it.
In an optional embodiment, the processing unit 14 determining the position coordinates of the virtual image formed by the local part in the reflection unit 13 comprises:
determining the position coordinates of the local part using the image acquisition device 11, the position coordinates being coordinates in the acquisition coordinate system, i.e. in the acquired image;
determining the position coordinates of the virtual image formed by the local part in the reflection unit 13 according to the mapping relationship between the acquisition coordinate system and the display coordinate system.
Here, the acquisition coordinate system is the reference frame of the image acquisition device 11, and the display coordinate system is the reference frame of the display device 12.
When the image acquisition device and the display device share the same positional relationship, the acquisition coordinate system coincides with the display coordinate system (for example, in Fig. 4 the image acquisition device sits behind the display device at its centre). When their positional relationships differ, a mapping relationship exists between the acquisition coordinate system and the display coordinate system (for example, in Fig. 3 the image acquisition device and the display device are arranged one above the other), and the position coordinates of the local part can be converted into display coordinates of the display device according to this mapping relationship.
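The conversion step above can be sketched as a simple scale-and-offset mapping between the two coordinate systems (a minimal model; a real device might need a full homography, and the names below are illustrative):

```python
def acquisition_to_display(pt, scale=(1.0, 1.0), offset=(0.0, 0.0)):
    """Convert a point from the acquisition (camera) coordinate system to
    the display coordinate system. For the co-centred layout of Fig. 4 the
    mapping is the identity; for the stacked layout of Fig. 3 a vertical
    offset accounts for the camera sitting above the screen."""
    x, y = pt
    return (x * scale[0] + offset[0], y * scale[1] + offset[1])
```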
For example, if the local part is the left ear, as shown in Fig. 8, the image acquisition device 11 determines the position coordinates of the left ear; the position coordinates of the virtual image of the left ear in the reflection unit 13 are determined according to the mapping relationship between the acquisition coordinate system and the display coordinate system; the coordinate information and the magnified left-ear image are then sent to the display device 12, which displays the magnified left ear at the position of the virtual image of the left ear.
Of course, if the input information acquired by the input acquisition device 15 contains display position information for the target image specified by the user, the processing unit 14 determines the display position matching the display device 12 based on the specified display position information and notifies the display device 12 of it.
For example, suppose the local part is the chin and the user input acquired by the input acquisition device 15 specifies, by voice, that the magnified chin be shown in the upper left corner of the display device 12. The processing unit 14 then notifies the display device 12 of the upper-left-corner position information together with the magnified chin image, and the display device 12 displays the magnified chin image in the upper left corner.
In this way, the electronic device can determine the display position of the target image from the user's input, giving the user more autonomy over where the target image is displayed.
Embodiment three
An electronic device, comprising:
an image acquisition device 11 configured to acquire an image in an image acquisition region;
a display device 12 configured to output an image;
a reflection unit 13 configured to form a virtual image in a reflection region, the reflection region overlapping the image acquisition region;
a processing unit 14 configured to, when a preset condition is met, process a local part of the image acquired by the image acquisition device 11 to obtain a target image of the local part, and control the display device 12 to display the target image;
wherein the local part corresponds to a target body located in the overlapping portion, and the size of the target image on the display device 12 is greater than the size of the virtual image formed by the target body in the reflection unit 13.
Optionally, the processing unit 14 obtaining the target image of the local part comprises:
obtaining a magnification factor from system setting information;
obtaining the target image of the local part based on the magnification factor.
In this way, without the user making any setting, the electronic device presents the magnified image of the local part according to the system setting.
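The crop-and-magnify step can be sketched as follows, using nearest-neighbour replication on a plain 2-D list for brevity (a production version would use a proper resampling routine; all names here are illustrative):

```python
def magnify_region(image, cx, cy, half, factor):
    """Crop a window of half-width `half` around (cx, cy) and upscale it
    by the integer magnification `factor` using nearest-neighbour
    replication, yielding the target image of the local part."""
    h, w = len(image), len(image[0])
    x0, x1 = max(0, cx - half), min(w, cx + half)
    y0, y1 = max(0, cy - half), min(h, cy + half)
    crop = [row[x0:x1] for row in image[y0:y1]]
    out = []
    for row in crop:
        # repeat each pixel `factor` times horizontally ...
        expanded = [px for px in row for _ in range(factor)]
        # ... and each row `factor` times vertically
        for _ in range(factor):
            out.append(list(expanded))
    return out
```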
It should be noted that if there are N users in the image acquisition region of the image acquisition device 11, N being a positive integer greater than or equal to 2, the electronic device may display the local part in the following ways:
if M users provide inputs that meet the preset condition, determine the local part of each of the M users separately, and display the target image of each user's local part separately, with no overlap between the target images, where M is a positive integer and M≤N;
or, if M users provide inputs that meet the preset condition, determine the user nearest to the display device 12, determine that user's local part, and display only the target image of that user's local part. In one implementation, the image acquisition device 11 is a depth camera, and the user nearest to the display device 12 is determined through the depth camera. In another implementation, the image acquisition device 11 is an ordinary camera, and the user occupying the largest area in the acquired image is taken to be the user nearest to the display device 12.
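Both selection rules can be sketched together; the user records and the depth lookup below are illustrative stand-ins for the camera output:

```python
def nearest_user(users, depths=None):
    """Pick the user assumed nearest to the display device 12.

    With a depth camera, `depths` maps user id -> measured distance and
    the minimum wins; with an ordinary camera, the user whose face box
    (w, h) covers the largest image area is taken to be nearest."""
    if depths is not None:
        return min(users, key=lambda u: depths[u["id"]])
    return max(users, key=lambda u: u["box"][0] * u["box"][1])
```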
In the above scheme, preferably, the electronic device further comprises:
an input acquisition device 15 configured to acquire an input of the user;
correspondingly, the processing unit 14 is further configured to judge whether the input meets the preset condition, and, when it does, to process the local part of the image acquired by the image acquisition device 11, obtain the target image of the local part, and control the display device 12 to display the target image.
Optionally, the processing unit 14 is further configured to:
obtain a target magnification threshold;
magnify the local part according to the target magnification threshold to obtain the target image.
In an optional embodiment, the processing unit 14 is further configured to:
control the display device 12 to display candidate magnification thresholds;
determine the magnification threshold selected by the user according to the acquisition information of the input acquisition device 15, the acquisition information comprising the user's input information and visual information.
In a specific embodiment, the input acquisition device 15 acquires the user's input information, which includes the spatial position of the hand or of a particular fingertip, and also acquires the user's visual information, which includes the spatial position of the eyes. Based on the principle that two points define a line, the intersection of the eye-fingertip extension line with the display device 12 is determined, the magnification threshold corresponding to that intersection is identified, and the magnification threshold at the intersection, or within a preset range centred on it, is taken as the target magnification threshold. Fig. 9 is a schematic diagram of a user selecting a magnification threshold through the mirror image: as shown in Fig. 9, the user extends an arm and points the fingertip at the ×5 position on the display device, and the processing unit learns from the above computation that the selected magnification threshold is ×5.
In this way, the processing unit 14 can determine the user's selected target magnification threshold from the acquisition information of the input acquisition device 15, and the user can tell the electronic device the desired magnification factor for the local part by pointing in the air, improving the user's experience.
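The two-points-define-a-line step can be sketched as a ray-plane intersection, assuming eye and fingertip positions in a shared 3-D frame whose z = 0 plane is the display surface (a convention the patent does not fix):

```python
def ray_screen_intersection(eye, fingertip):
    """Extend the eye -> fingertip line until it meets the display plane
    z = 0 and return the (x, y) screen point it hits; that point indexes
    the candidate magnification threshold the user selected."""
    ex, ey, ez = eye
    fx, fy, fz = fingertip
    if ez == fz:
        raise ValueError("line is parallel to the display plane")
    t = ez / (ez - fz)  # parameter at which z reaches 0
    return (ex + t * (fx - ex), ey + t * (fy - ey))
```

For Fig. 9, the returned screen point would then be matched against the displayed candidate thresholds (×5 in the example).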
In another specific embodiment, the input acquisition device 15 has ranging and angle-measuring functions. As shown in Fig. 10, the input acquisition device 15 first determines the position of the user's eyes, then determines the distance S between itself and the eyes and the angle α with the horizontal plane; from S, α, and the distance between the input acquisition device 15 and the display device 12, the pointed-at position is calculated, and the processing unit 14 takes the magnification threshold corresponding to that position as the target magnification threshold.
In this way, the processing unit 14 can make full use of the functions of the input acquisition device 15 to determine the user's selected target magnification threshold.
In another specific embodiment, the input acquisition device 15 acquires voice information characterizing a selected magnification threshold of ×5, and the processing unit 14 determines from the voice information that the selected magnification threshold is ×5.
In this way, the processing unit 14 can determine the user's selected target magnification threshold from the acquisition information of the input acquisition device 15, and the user can tell the electronic device the desired magnification of the local part by voice, improving the user's experience.
In another specific embodiment, the input acquisition device 15 acquires input information in the form of a specific gesture motion, each occurrence of which within a preset time represents ×y, where y is a magnification factor relative to the virtual image of the local part. If the number of occurrences of the specific gesture collected by the input acquisition device 15 within the preset time period is x, x being a positive integer greater than or equal to 1, the processing unit 14 determines the target magnification as ×y^x from the factor represented by the specific gesture and the number of its occurrences. In this way, the user can tell the electronic device the desired magnification factor for the local part through a specific gesture motion, improving the user's experience.
Embodiment four
Fig. 11 is a third schematic diagram of the composition structure of the electronic device provided by an embodiment of the present invention. As shown in Fig. 11, the electronic device comprises:
an image acquisition unit 21 configured to acquire an image in an image acquisition region;
a display unit 22 configured to output an image;
a reflection unit 23 configured to form a virtual image in a reflection region, the reflection region overlapping the image acquisition region;
a processing unit 24 configured to, when a preset condition is met, process a local part of the image acquired by the image acquisition unit 21 to obtain a target image of the local part, and control the display unit 22 to display the target image;
wherein the local part corresponds to a target body located in the overlapping portion, and the size of the target image on the display unit 22 is greater than the size of the virtual image formed by the target body in the reflection unit 23.
Here, the preset condition refers to the condition that triggers the local magnification display function. For example, the preset condition may be that the user has input a predefined gesture, such as touching the face with a hand; as another example, the preset condition may be that the user has input a predefined voice command.
The reflection unit 23 overlaps the image acquisition unit 21 and lies within the image acquisition region of the image acquisition unit 21 without affecting its acquisition; the reflection unit 23 also overlaps the display unit 22 and lies within the sensing region of the display unit 22 without preventing the display of the display unit 22 from being perceived.
Optionally, the electronic device further comprises:
an input acquisition unit 25 configured to acquire an input of the user;
the processing unit 24 is further configured to judge whether the input meets the preset condition;
wherein the input acquisition unit 25 and the image acquisition unit 21 may be the same device or different devices.
Optionally, the processing unit 24 is further configured to:
determine a datum point in the image of the image acquisition region according to the input, and determine, based on the datum point, the local part containing the datum point;
or determine a manipulation region corresponding to the input in the image of the image acquisition region, and determine the local part corresponding to the manipulation region.
Optionally, the processing unit 24 is further configured to:
determine a display position;
control the display unit 22 to display the target image at the display position.
Optionally, the processing unit 24 is further configured to:
determine a first position of the virtual image formed in the reflection unit 23 by the target body part corresponding to the local part, the first position being a position within the region of that virtual image;
control the display unit 22 to display the target image based on the first position.
In a specific embodiment, the processing unit 24 is further configured to:
determine a first area of the virtual image formed in the reflection unit 23 by the target body part corresponding to the local part;
control the display unit 22 to display the target image based on a second area that includes the first area.
Optionally, the processing unit 24 is further configured to:
obtain a target magnification threshold;
magnify the local part according to the target magnification threshold to obtain the target image.
Optionally, the processing unit 24 is further configured to:
control the display unit 22 to display candidate magnification thresholds;
determine the magnification threshold selected by the user according to the acquisition information of the input acquisition unit 25, the acquisition information comprising the user's input information and visual information.
In practical applications, the image acquisition unit 21 may be implemented by a camera in the electronic device; the display unit 22 by a display or display screen; the input acquisition unit 25 by a camera or microphone in the electronic device; and the reflection unit 23 by the display or display screen. The processing unit 24 may correspond to a processor, whose specific structure may be a CPU, an MCU, a DSP, a PLC, or another electronic component or set of electronic components with processing capability. The processor includes executable code stored in a storage medium; the processor is connected to the storage medium through a communication interface such as a bus and, when executing the functions corresponding to each module, reads the executable code from the storage medium and runs it. The part of the storage medium storing the executable code is preferably a non-transitory storage medium.
When the preset condition is met, the electronic device of this embodiment processes the local part of the acquired image, obtains the target image of the local part, and displays it. In this way, when the trigger condition is met, an image larger than the virtual image formed by the target body in the reflection region can be displayed, allowing the user to see an image larger than the corresponding virtual image of the target body.
Embodiment five
Based on the electronic devices described in the above embodiments, an embodiment of the present invention further provides an image processing method. As shown in Fig. 12, the image processing method mainly comprises the following steps:
Step 101: acquire an image in an image acquisition region.
Specifically, the image in the image acquisition region is acquired by the image acquisition device in the electronic device.
Step 102: when a preset condition is met, process a local part of the image to obtain a target image of the local part.
In the above scheme, the method further comprises:
acquiring an input of the user;
judging whether the input meets the preset condition.
Specifically, the user's input is acquired by the input acquisition device in the electronic device.
Here, the preset condition refers to the condition that triggers the local magnification display function. For example, the preset condition may be that the user has input a predefined gesture, such as touching the face with a hand; as another example, the preset condition may be that the user has input a predefined voice command.
In one implementation, processing the local part of the image comprises:
determining a datum point in the image of the image acquisition region according to the input;
determining, based on the datum point, the local part containing the datum point.
In a specific embodiment, the processing unit determines a datum point based on the user's input collected by the input acquisition device, and determines the position coordinates of the datum point in the acquisition coordinate system; centring on those position coordinates, it determines a range within a preset distance and notifies the image acquisition device to obtain the image within that range. The image within that range is the local part.
For example, it is detected that the user has touched the nose with a finger, and the position coordinates of the nose are obtained from this operation; then, centring on those position coordinates, an acquisition range within a preset extent is determined, and the image acquisition device is notified to obtain the image of that range. In this way, the user can tell the electronic device which part to display with a simple gesture prompt.
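The datum-centred range can be sketched as a rectangle clamped to the image bounds (the half-width parameter and the names are illustrative):

```python
def region_around(datum, half, img_w, img_h):
    """Build the acquisition rectangle (x0, y0, x1, y1) centred on the
    datum point and clamped to the acquired image bounds."""
    x, y = datum
    return (max(0, x - half), max(0, y - half),
            min(img_w, x + half), min(img_h, y + half))
```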
In one implementation, processing the local part of the image comprises:
determining a manipulation region corresponding to the input in the image of the image acquisition region;
determining the local part corresponding to the manipulation region.
In a specific embodiment, based on the user's input trajectory collected by the input acquisition device, the position coordinates corresponding to the trajectory are determined, and the image acquisition device is notified to obtain the image within that spatial range; the image falling within that spatial range is the local part.
In one implementation, if, after the magnified region has been determined, the datum point is detected to change, the magnified region changes to follow the datum point, the magnified region being the region determined from the datum point. For example, suppose the datum is a finger touch position: as the finger moves, the datum point based on the touch position moves, and the magnified region determined from the datum point changes accordingly.
In one embodiment, if, after the magnified region has been determined, the datum point is detected to have left while the corresponding target body part in the magnified region moves, the target body part originally in the magnified region is locked, and the magnified region moves to follow that target body part.
For example, it is detected that the user has touched the nose, and the position coordinates of the nose are obtained from this operation; the image acquisition device is notified to focus on the nose based on those coordinates and obtain the image related to the nose. When the user's nose is detected to move, the image acquisition device is notified to follow the moved position coordinates of the nose, keeping it in focus and obtaining a real-time image of it. In this way, after the local magnification mode is triggered, the image of the pre-delineated part can be acquired in real time.
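The lock-and-follow behaviour can be sketched as a per-frame update of the magnified rectangle (the datum flag and offset arguments are illustrative stand-ins for the detection results):

```python
def update_region(region, datum_present, target_offset):
    """Per-frame update of the magnified region (x0, y0, x1, y1).
    While the datum point is still present it anchors the region; once it
    has left, the region is locked onto the target body part and
    translated by the part's frame-to-frame offset (dx, dy)."""
    if datum_present:
        return region
    x0, y0, x1, y1 = region
    dx, dy = target_offset
    return (x0 + dx, y0 + dy, x1 + dx, y1 + dy)
```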
In one embodiment, if, after the magnified region has been determined, the datum point is detected to have left while the corresponding target body part in the magnified region moves, the magnified region is fixed, and whatever target body part newly enters the magnified region is determined to be the part to magnify.
In a specific embodiment, based on the user's input trajectory collected by the input acquisition device, the position coordinates corresponding to the trajectory are determined, and the image acquisition device is notified to obtain the image within that spatial range; the image falling within that spatial range is the local part.
For example, it is detected that the user has drawn a circle in the air with a hand, and the coordinate range of the circle is obtained from this operation; the image acquisition device is notified to obtain the image within the coordinate range enclosed by the circle. If the circle drawn by the user covers the left half of the face, then when the user moves the face so that the right half falls into the spatial range covered by the circle, the right half of the face is magnified instead. In this way, after the local magnification mode is triggered, the image falling within the pre-delineated coordinate range is acquired in real time.
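Deciding which body part falls inside the circled range reduces to a point-in-circle test (a centre/radius representation of the drawn circle is assumed here):

```python
def in_circle(point, center, radius):
    """Return True when an image point falls inside the circled range,
    so a part newly entering the circle becomes the part to magnify."""
    dx = point[0] - center[0]
    dy = point[1] - center[1]
    return dx * dx + dy * dy <= radius * radius
```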
Step 103: control the display device to display the target image.
Here, the local part corresponds to a target body located in the overlapping portion, and the size of the target image on the display device 12 is greater than the size of the virtual image formed by the target body in the reflection unit 13.
In this way, when the trigger condition is met, an image larger than the reflected image can be displayed, improving the user's experience.
In one embodiment, controlling the display device to display the target image comprises:
determining a display position;
controlling the display device to display the target image at the display position.
Here, the display position may be a fixed position, for example the upper left corner, the lower left corner, the upper right corner, or the lower right corner of the display device. The fixed position may be a system default, or may be set in advance by the user according to his or her own usage habits.
In another embodiment, controlling the display device to display the target image comprises:
determining a first position of the virtual image formed by the local part in the reflection unit, the first position referring to a position of the virtual image formed in the reflection unit by the target body part corresponding to the local part;
controlling the display device to display the target image based on the first position.
In a specific embodiment, controlling the display device to display the target image comprises:
determining a first area of the virtual image formed in the reflection unit by the target body part corresponding to the local part;
controlling the display device to display the target image based on a second area that includes the first area.
In this way, the electronic device can display the target image at a position matched to the formed virtual image, strengthening the association between the target image and the virtual image. This favours adoption of the local magnification function and improves its appeal and visual effect for the user.
In one embodiment, controlling the display device to display the target image comprises:
obtaining a target magnification threshold;
magnifying the local part according to the target magnification threshold to obtain the target image.
In this way, the magnification applied to the local part can be adjusted.
In one embodiment, controlling the display device to display the target image comprises:
controlling the display device to display candidate magnification thresholds;
determining the magnification threshold selected by the user according to the acquisition information of the input acquisition device, the acquisition information comprising the user's input information and visual information.
In a specific embodiment, the input acquisition device acquires the user's input information, which includes the spatial position of the hand or of a particular fingertip, and also acquires the user's visual information, which includes the spatial position of the eyes. Based on the principle that two points define a line, the intersection of the eye-fingertip extension line with the display device is determined, the magnification threshold corresponding to that intersection is identified, and the magnification threshold at the intersection, or within a preset range centred on it, is taken as the target magnification threshold. For example, the user stretches out an arm and points the fingertip at the ×5 position on the display device; the processing unit then learns from the above computation that the selected magnification threshold is ×5.
In this way, the user's selected target magnification threshold can be determined from the acquisition information of the input acquisition device, and the user can tell the electronic device the desired magnification factor for the local part by pointing in the air, improving the user's experience.
The method of this embodiment can be applied to the electronic devices described in the above embodiments. For example, the electronic device may be a mobile phone, a tablet, a laptop, a television, a smart mirror, or the like.
When the trigger condition is met, the method of this embodiment can display an image larger than the virtual image formed by the target body in the reflection region, allowing the user to see an image larger than the corresponding virtual image of the target body. This is convenient for the user to observe the target body, and even to perform certain processing operations on it based on the observation, improving the user's experience.
In the several embodiments provided by the present invention, it should be understood that the disclosed method, apparatus, and electronic device may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division into units is only a division by logical function; other divisions are possible in actual implementation, e.g. multiple units or components may be combined or integrated into another system, and some features may be ignored or not executed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or of other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, or each unit may stand alone as a unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be completed by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium; when the program is executed, the steps of the above method embodiments are performed. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.
Alternatively, if the above integrated unit of the embodiment of the present invention is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the embodiments of the present invention, in essence, or the part that contributes to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the method of each embodiment of the present invention. The aforementioned storage medium includes various media that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disc.
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can be easily conceived by those familiar with the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (15)
1. An electronic device, comprising:
an image acquisition device, configured to acquire an image in an image acquisition region;
a display device, configured to output images;
a reflection device, configured to present a virtual image in a reflecting region, the reflecting region having an overlapping part with the image acquisition region;
a processing device, configured to, when a preset condition is met, process a portion of the image acquired by the image acquisition device to obtain a target image of the portion, and to control the display device to display the target image;
wherein the portion corresponds to a target object located in the overlapping part, and a size of the target image on the display device is greater than a size of the virtual image of the target object.
2. The electronic device according to claim 1, wherein:
the reflection device overlaps the image acquisition device, and the reflection device is located in the image acquisition region of the image acquisition device without affecting acquisition by the image acquisition device;
the reflection device overlaps the display device, and the reflection device is located in the sensing region of the display device without affecting perception of the display of the display device.
3. The electronic device according to claim 1, wherein the electronic device further comprises:
an input acquisition device, configured to acquire an input of a user;
the processing device is further configured to judge whether the input meets the preset condition;
wherein the input acquisition device and the image acquisition device are the same device, or the input acquisition device and the image acquisition device are different devices.
4. The electronic device according to claim 3, wherein the processing device is further configured to:
determine a datum point in the image in the image acquisition region according to the input;
determine, based on the datum point, the portion containing the datum point;
or
determine a manipulation region corresponding to the input in the image in the image acquisition region;
determine the portion corresponding to the manipulation region.
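The first branch of claim 4 (pick a datum point from the user's input, then take a portion containing it) can be sketched as a fixed-size crop box centered on the datum point and clamped to the image bounds; the box size and the clamping policy are assumptions for illustration, not specified by the claim:

```python
def portion_around(datum, half, width, height):
    """Return (x0, y0, x1, y1): a box of side 2*half containing the datum
    point, shifted inward where it would spill past the image edge."""
    x, y = datum
    x0 = min(max(x - half, 0), max(width - 2 * half, 0))
    y0 = min(max(y - half, 0), max(height - 2 * half, 0))
    return (x0, y0, min(x0 + 2 * half, width), min(y0 + 2 * half, height))
```

A datum point near an image corner still yields a full-size portion, because the box slides inward instead of being truncated.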
5. The electronic device according to claim 1, wherein the processing device is further configured to:
determine a display position;
control the display device to display the target image at the display position.
6. The electronic device according to claim 5, wherein the processing device is further configured to:
determine a first position at which the target object part corresponding to the portion forms a virtual image in the reflection device;
control the display device to display the target image based on the first position.
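One way to realize claim 6 is to place the target image next to the first position (where the target object's virtual image appears on the overlapped reflection device), so the enlarged view does not occlude the mirror image it enlarges. Placing it to the right of the virtual image, with a left-side fallback, is an assumption chosen for illustration:

```python
def display_position(first_box, target_w, display_w):
    """Choose the top-left corner for the target image, given the bounding
    box (x0, y0, x1, y1) of the virtual image at the first position."""
    x0, y0, x1, _ = first_box
    if x1 + target_w <= display_w:      # room to the right of the virtual image
        return (x1, y0)
    return (max(x0 - target_w, 0), y0)  # otherwise fall back to the left side
```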
7. The electronic device according to claim 3, wherein the processing device is further configured to:
obtain a target amplification threshold;
amplify the portion according to the target amplification threshold to obtain the target image.
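The amplification of claim 7 must still satisfy claim 1's constraint that the displayed target image be larger than the target object's virtual image. A sketch that raises the target amplification threshold whenever it would not clear the virtual-image size; the 10% safety margin is an arbitrary assumption:

```python
def effective_amplification(threshold, portion_px, virtual_px):
    """Return a factor that honours the target amplification threshold but
    never lets portion_px * factor fall at or below virtual_px."""
    if threshold * portion_px > virtual_px:
        return threshold
    return (virtual_px / portion_px) * 1.1  # assumed 10% safety margin
```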
8. The electronic device according to claim 7, wherein the processing device is further configured to:
control the display device to display candidate amplification thresholds;
determine the amplification threshold selected by the user according to acquisition information of the input acquisition device; wherein the acquisition information comprises: input information and visual information of the user.
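Claim 8 shows candidate amplification thresholds on the display and lets the user's input (or gaze) pick one. Assuming the candidates are drawn as a vertical list of fixed-height rows and the input acquisition device reports a touch y-coordinate, the selection reduces to an index lookup; the layout is an assumption for illustration:

```python
def select_threshold(candidates, touch_y, row_height):
    """Map a touch y-coordinate onto the displayed list of candidate
    amplification thresholds, clamping to the list bounds."""
    idx = touch_y // row_height
    idx = max(0, min(idx, len(candidates) - 1))
    return candidates[idx]
```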
9. An image processing method, comprising:
acquiring an image in an image acquisition region;
when a preset condition is met, processing a portion of the image to obtain a target image of the portion;
controlling a display device to display the target image;
wherein the portion corresponds to a target object located in an overlapping part, and a size of the target image on the display device is greater than a size of the virtual image of the target object.
10. The method according to claim 9, further comprising:
acquiring an input of a user;
judging whether the input meets the preset condition.
11. The method according to claim 10, wherein the processing a portion of the image comprises:
determining a datum point in the image in the image acquisition region according to the input;
determining, based on the datum point, the portion containing the datum point;
or
determining a manipulation region corresponding to the input in the image in the image acquisition region;
determining the portion corresponding to the manipulation region.
12. The method according to claim 9, wherein the controlling a display device to display the target image comprises:
determining a display position;
controlling the display device to display the target image at the display position.
13. The method according to claim 9, wherein the controlling a display device to display the target image comprises:
determining a first position at which the target object part corresponding to the portion forms a virtual image in a reflection device;
controlling the display device to display the target image based on the first position.
14. The method according to claim 9, wherein the controlling a display device to display the target image comprises:
obtaining a target amplification threshold;
amplifying the portion according to the target amplification threshold to obtain the target image.
15. The method according to claim 14, wherein the controlling a display device to display the target image comprises:
controlling the display device to display candidate amplification thresholds;
determining the amplification threshold selected by the user according to acquisition information of an input acquisition device; wherein the acquisition information comprises: input information and visual information of the user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610395033.XA CN106095088B (en) | 2016-06-06 | 2016-06-06 | A kind of electronic equipment and its image processing method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106095088A CN106095088A (en) | 2016-11-09 |
CN106095088B true CN106095088B (en) | 2019-03-08 |
Family
ID=57447777
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106095088B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111651040B (en) * | 2020-05-27 | 2021-11-26 | 华为技术有限公司 | Interaction method of electronic equipment for skin detection and electronic equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1627790A (en) * | 2003-12-08 | 2005-06-15 | LG Electronics Inc. | Method of scaling partial area of main picture |
CN1874407A (en) * | 2006-04-20 | 2006-12-06 | Ocean University of China | Method for magnifying content displayed on screen of handset locally |
US8782565B2 (en) * | 2012-01-12 | 2014-07-15 | Cisco Technology, Inc. | System for selecting objects on display |
CN105075246A (en) * | 2013-02-20 | 2015-11-18 | Microsoft Corporation | Providing a tele-immersive experience using a mirror metaphor |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4462819B2 (en) * | 2002-09-26 | 2010-05-12 | ソニー株式会社 | Information processing apparatus and method, recording medium, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |