WO2018201716A1 - Painting and Calligraphy Device, Painting and Calligraphy Apparatus, and Painting and Calligraphy Assisting Method - Google Patents

Painting and Calligraphy Device, Painting and Calligraphy Apparatus, and Painting and Calligraphy Assisting Method

Info

Publication number
WO2018201716A1
WO2018201716A1 (PCT/CN2017/114769, CN 2017114769 W)
Authority
WO
WIPO (PCT)
Prior art keywords
painting
calligraphy
image
area
information
Prior art date
Application number
PCT/CN2017/114769
Other languages
English (en)
French (fr)
Inventor
牟鑫鑫
Original Assignee
京东方科技集团股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司 (BOE Technology Group Co., Ltd.)
Priority to US 16/088,624 (US11107254B2)
Publication of WO2018201716A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F 3/011: Input arrangements for interaction between user and computer; arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 18/22: Pattern recognition; analysing; matching criteria, e.g. proximity measures
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 7/20: Image analysis; analysis of motion
    • G06T 7/50: Image analysis; depth or shape recovery
    • G06T 7/70: Image analysis; determining position or orientation of objects or cameras
    • G06T 2207/30241: Indexing scheme for image analysis; subject of image: trajectory
    • G06V 30/333: Character recognition; digital ink; preprocessing, feature extraction
    • G06V 30/347: Character recognition; digital ink; sampling, contour coding, stroke extraction
    • G06V 30/2268: Character recognition of cursive writing using stroke segmentation
    • G06V 30/293: Character recognition adapted to alphabets of characters other than Kanji, Hiragana or Katakana

Definitions

  • At least one embodiment of the present disclosure is directed to a painting and calligraphy device, a painting and calligraphy apparatus, and a painting and calligraphy assisting method.
  • At least one embodiment of the present disclosure provides a painting and calligraphy device, a painting and calligraphy apparatus, and a painting and calligraphy assisting method.
  • The painting and calligraphy device applies augmented reality technology so that the user can copy the preset painting and calligraphy information mapped in the painting and calligraphy area, improving the experience and effect of practice over repeated sessions and quickly raising the user's calligraphy and painting level.
  • At least one embodiment of the present disclosure provides a painting and calligraphy device, comprising: a display portion configured to display preset painting and calligraphy information; an image acquisition portion configured to acquire an image in front of the user; and a control unit communicatively connected to the display portion, the control unit being configured to control the display portion to display the preset painting and calligraphy information, wherein the image in front of the user is processed to obtain a painting and calligraphy area, the image light displayed by the display portion is transmitted to the user's eyes, and the preset painting and calligraphy information is mapped in the painting and calligraphy area.
  • The control unit is communicatively connected to the image acquisition portion, and the control unit includes an identification module configured to process the image acquired by the image acquisition portion to obtain the painting and calligraphy area.
  • The identification module is further configured to recognize a painting sub-area within the painting and calligraphy area according to the image, and the control unit controls the display portion so that a sub-graphic of the preset painting and calligraphy information is mapped within the painting sub-area.
  • The identification module is further configured to recognize the first character written by the user according to the image, and the control unit controls the display portion so that the preset painting and calligraphy information is mapped around the first character at a preset position within the painting and calligraphy area.
  • The image acquisition portion is further configured to acquire a depth image of the pen tip used by the user, and the identification module is further configured to recognize, from the depth image, the change of the three-dimensional coordinates of the pen tip during movement, so that the control unit extracts effective handwriting information; the effective handwriting information is the motion trajectory obtained when the distance between the pen tip and the writing carrier in the painting and calligraphy area is 0, and the motion trajectory lies within a predetermined area of the painting and calligraphy area.
  • The control unit is communicatively connected to the image acquisition portion, and the control unit includes an identification circuit configured to process the image acquired by the image acquisition portion to obtain the painting and calligraphy area.
  • The identification circuit is further configured to recognize a painting sub-area within the painting and calligraphy area according to the image, and the control unit controls the display portion so that a sub-graphic of the preset painting and calligraphy information is mapped within the painting sub-area.
  • The identification circuit is further configured to recognize the first character written by the user according to the image, and the control unit controls the display portion so that the preset painting and calligraphy information is mapped around the first character at a preset position within the painting and calligraphy area.
  • The image acquisition portion is further configured to acquire a depth image of the pen tip used by the user, and the identification circuit is further configured to recognize, from the depth image, the change of the three-dimensional coordinates of the pen tip during movement, so that the control unit extracts effective handwriting information; the effective handwriting information is the motion trajectory obtained when the distance between the pen tip and the writing carrier in the painting and calligraphy area is 0, and the motion trajectory lies within a predetermined area of the painting and calligraphy area.
  • The control unit is communicatively connected to the image acquisition portion, and the image acquisition portion includes an identification module configured to process the image to obtain the painting and calligraphy area.
  • The identification module is further configured to recognize a painting sub-area within the painting and calligraphy area according to the image, and the control unit is further configured to control the display portion so that a sub-graphic of the preset painting and calligraphy information is mapped within the painting sub-area.
  • The identification module is further configured to recognize the first character written by the user according to the image, and the control unit is further configured to control the display portion so that the preset painting and calligraphy information is mapped around the first character at a preset position within the painting and calligraphy area.
  • The image acquisition portion is further configured to acquire a depth image of the pen tip used by the user, and the identification module recognizes, from the depth image, the change of the three-dimensional coordinates of the pen tip during movement and sends it to the control unit, which extracts effective handwriting information; the effective handwriting information is the motion trajectory obtained when the distance between the pen tip and the writing carrier in the painting and calligraphy area is 0, and the motion trajectory lies within a predetermined area of the painting and calligraphy area.
  • The control unit is communicatively connected to the image acquisition portion, and the image acquisition portion includes an identification circuit configured to process the image to obtain the painting and calligraphy area.
  • The identification circuit is further configured to recognize a painting sub-area within the painting and calligraphy area according to the image, and the control unit is further configured to control the display portion so that a sub-graphic of the preset painting and calligraphy information is mapped within the painting sub-area.
  • The identification circuit is further configured to recognize the first character written by the user according to the image, and the control unit is further configured to control the display portion so that the preset painting and calligraphy information is mapped around the first character at a preset position within the painting and calligraphy area.
  • The image acquisition portion is further configured to acquire a depth image of the pen tip used by the user, and the identification circuit recognizes, from the depth image, the change of the three-dimensional coordinates of the pen tip during movement and sends it to the control unit, which extracts effective handwriting information; the effective handwriting information is the motion trajectory obtained when the distance between the pen tip and the writing carrier in the painting and calligraphy area is 0, and the motion trajectory lies within a predetermined area of the painting and calligraphy area.
  • The painting and calligraphy device further includes an identification module, the identification module being communicatively connected to the image acquisition portion and the control unit, respectively, and configured to process the image acquired by the image acquisition portion to obtain the painting and calligraphy area.
  • The identification module is further configured to recognize a painting sub-area within the painting and calligraphy area according to the image, and the control unit is further configured to control the display portion so that a sub-graphic of the preset painting and calligraphy information is mapped within the painting sub-area.
  • The identification module is further configured to recognize the first character written by the user according to the image, and the control unit is further configured to control the display portion so that the preset painting and calligraphy information is mapped around the first character at a preset position within the painting and calligraphy area.
  • The image acquisition portion is further configured to acquire a depth image of the pen tip used by the user, and the identification module is further configured to recognize, from the depth image, the change of the three-dimensional coordinates of the pen tip during movement and send it to the control unit, which extracts effective handwriting information; the effective handwriting information is the motion trajectory obtained when the distance between the pen tip and the writing carrier in the painting and calligraphy area is 0, and the motion trajectory lies within a predetermined area of the painting and calligraphy area.
  • The painting and calligraphy device further includes an identification circuit, the identification circuit being communicatively connected to the image acquisition portion and the control unit, respectively, and configured to process the image acquired by the image acquisition portion to obtain the painting and calligraphy area.
  • The identification circuit is further configured to recognize a painting sub-area within the painting and calligraphy area according to the image, and the control unit is further configured to control the display portion so that a sub-graphic of the preset painting and calligraphy information is mapped within the painting sub-area.
  • The identification circuit is further configured to recognize the first character written by the user according to the image, and the control unit is further configured to control the display portion so that the preset painting and calligraphy information is mapped around the first character at a preset position within the painting and calligraphy area.
  • The image acquisition portion is further configured to acquire a depth image of the pen tip used by the user, and the identification circuit is further configured to recognize, from the depth image, the change of the three-dimensional coordinates of the pen tip during movement and send it to the control unit, which extracts effective handwriting information; the effective handwriting information is the motion trajectory obtained when the distance between the pen tip and the writing carrier in the painting and calligraphy area is 0, and the motion trajectory lies within a predetermined area of the painting and calligraphy area.
  • The control unit is further configured to emphasize the display of the user's next stroke based on the effective handwriting information.
  • The control unit is further configured to judge and score according to the degree of fit between the preset painting and calligraphy information and the effective handwriting information, and to send a display signal of the score to the display portion so that the score result is displayed.
  • The device further includes an external interface communicatively connected to the control unit; the control unit is further configured to judge and score according to the degree of fit between the preset painting and calligraphy information and the effective handwriting information, and to send the score signal to the external interface.
  • The device further includes a memory storing at least one of the preset painting and calligraphy information and the effective handwriting information.
  • The control unit is further configured to analyze the score so as to classify the writing condition of each stroke written by the user.
  • The display portion includes a projection portion and a transflective portion; the projection portion projects the preset painting and calligraphy information displayed by the display portion onto the transflective portion, and the transflective portion reflects the image light of the projected preset painting and calligraphy information into the user's eyes.
  • At least one embodiment of the present disclosure provides a painting and calligraphy apparatus, comprising: a headwear portion; and a painting and calligraphy device on the headwear portion, wherein the painting and calligraphy device is the painting and calligraphy device provided by any of the embodiments of the present disclosure.
  • At least one embodiment of the present disclosure provides a painting and calligraphy assisting method, comprising: collecting an image in front of the user; recognizing a painting and calligraphy area according to the image; and transmitting the image light of the preset painting and calligraphy information to the user's eyes so that the preset painting and calligraphy information is mapped in the painting and calligraphy area.
  • The method further includes: recognizing a painting sub-area within the painting and calligraphy area according to the image; and mapping a sub-graphic of the preset painting and calligraphy information within the painting sub-area.
  • The method further includes: recognizing the first character written by the user according to the image; and mapping the preset painting and calligraphy information around the first character at a preset position within the painting and calligraphy area.
  • The method further includes: collecting a depth image of the pen tip in front of the user; and recognizing, from the depth image, the change of the three-dimensional coordinates of the pen tip during movement so as to extract effective handwriting information, wherein the effective handwriting information is the motion trajectory obtained when the distance between the pen tip and the writing carrier in the painting and calligraphy area is 0, and the motion trajectory lies within a predetermined area of the painting and calligraphy area.
  • The method further includes: emphasizing the display of the user's next stroke according to the effective handwriting information.
  • The method further includes: judging and scoring according to the degree of fit between the preset painting and calligraphy information and the effective handwriting information.
  • The score is displayed by projection or output.
  • The score is analyzed to classify the writing condition of each stroke written by the user.
  • FIGS. 1a-1e are schematic diagrams of a painting and calligraphy device according to an embodiment of the present disclosure;
  • FIG. 2a is a schematic diagram of a painting and calligraphy area viewed through the display portion of a painting and calligraphy device according to an embodiment of the present disclosure;
  • FIG. 2b is a schematic diagram of a user's copying process according to an embodiment of the present disclosure;
  • FIG. 2c is a schematic diagram of another painting and calligraphy area viewed through the display portion of a painting and calligraphy device according to an embodiment of the present disclosure;
  • FIG. 2d is a schematic diagram of yet another painting and calligraphy area viewed through the display portion of a painting and calligraphy device according to an embodiment of the present disclosure;
  • FIG. 3a is a schematic diagram of intelligent scoring and display of a painting and calligraphy device according to an embodiment of the present disclosure;
  • FIG. 3b is another schematic diagram of intelligent scoring and display of a painting and calligraphy device according to an embodiment of the present disclosure;
  • FIG. 4 is a schematic diagram of the main working flow of a painting and calligraphy device according to an embodiment of the present disclosure;
  • FIG. 5 is a schematic diagram of a painting and calligraphy apparatus according to an embodiment of the present disclosure;
  • FIGS. 6a-6d are flowcharts of a painting and calligraphy assisting method according to an embodiment of the present disclosure.
  • Embodiments of the present disclosure provide a painting and calligraphy device, a painting and calligraphy apparatus, and a painting and calligraphy assisting method.
  • The painting and calligraphy device includes a display portion, an image acquisition portion, and a control unit.
  • The display portion is configured to display preset painting and calligraphy information; the image acquisition portion is configured to acquire an image in front of the user; the control unit is communicatively connected to the display portion and is configured to control the display portion to display the preset painting and calligraphy information; the image in front of the user is processed to obtain the painting and calligraphy area, the image light displayed by the display portion is transmitted to the user's eyes, and the preset painting and calligraphy information is mapped in the painting and calligraphy area.
  • The painting and calligraphy device applies augmented reality technology so that the user can copy the preset painting and calligraphy information mapped in the painting and calligraphy area, improving the practice effect over repeated sessions and quickly raising the user's calligraphy and painting level.
  • FIGS. 1a-1e are schematic diagrams of the painting and calligraphy device provided by this embodiment.
  • The painting and calligraphy device includes a display portion 120, an image acquisition portion 130, and a control unit 150.
  • The display portion 120 is configured to display preset painting and calligraphy information in front of the user; the image acquisition portion 130 is configured to capture an image in front of the user; the control unit 150 is communicatively connected to the display portion 120 and is configured to control the display portion 120 to display the preset painting and calligraphy information; the image in front of the user is processed to obtain the painting and calligraphy area, the image light displayed by the display portion 120 is transmitted to the user's eyes, and a virtual image of the preset painting and calligraphy information is mapped in the painting and calligraphy area.
  • Here, the virtual image of the preset painting and calligraphy information refers to the virtual image formed in the eyes of the user of the device by the preset painting and calligraphy information displayed on the display portion.
  • For example, the control unit 150 is communicatively connected to the image acquisition portion 130, and the control unit 150 includes an identification module or identification circuit configured to process the image captured by the image acquisition portion 130 to obtain the painting and calligraphy area.
  • For example, the control unit 150 is communicatively connected to the image acquisition portion 130, and the image acquisition portion 130 includes an identification module or identification circuit configured to process the image captured by the image acquisition portion 130 to obtain the painting and calligraphy area.
  • For example, the painting and calligraphy device further includes an identification module or identification circuit 140 communicatively connected to the image acquisition portion 130 and the control unit 150, respectively, and configured to process the image acquired by the image acquisition portion 130 to obtain the painting and calligraphy area.
  • The identification module described above implements the identification function by a software algorithm executed by various types of processors.
  • For example, the identification module can be a module implemented as a software algorithm.
  • The identification circuit described above implements the identification function in hardware; that is, without considering cost, a person skilled in the art can construct a corresponding hardware circuit to implement the identification function.
  • For example, the hardware circuit includes conventional very-large-scale integration (VLSI) circuits or gate arrays, as well as semiconductor devices such as logic chips, transistors, or other discrete components.
  • The identification circuit can also be implemented by a programmable hardware device, such as a field-programmable gate array, programmable array logic, or a programmable logic device, which is not limited in this embodiment.
  • The painting and calligraphy device uses the image acquisition portion and the identification module or identification circuit to capture and recognize the painting and calligraphy area, and then displays the preset painting and calligraphy information through the control unit and the display portion, so that the user sees the virtual image of the preset painting and calligraphy information in the painting and calligraphy area; augmented reality of the preset painting and calligraphy information with the real scene is thus realized.
  • The painting and calligraphy device applies Augmented Reality (AR) so that the user can copy the virtual image of the preset painting and calligraphy information mapped in the painting and calligraphy area, improving the practice experience and effect over repeated sessions, increasing the fun of learning, and quickly raising the user's calligraphy and painting level.
  • The identification module or identification circuit 140 shown in FIG. 1d receives the image signal transmitted by the image acquisition portion 130, the control unit 150 receives the data signal of the identification module or identification circuit 140, and the control unit 150 transmits a display signal and the like to the display portion 120.
  • Here, a "communication connection" may be wired (for example, via a cable or optical fiber) or wireless (for example, via a wireless network such as Wi-Fi); the user herein refers to the person who is using the painting and calligraphy device.
  • For example, the preset painting and calligraphy information includes a painting/calligraphy graphic, that is, a graphic of calligraphy or of a painting.
  • For example, the calligraphy may include a variety of standard fonts such as regular script, Song typeface, cursive script, and clerical script (lishu), or the scripts of various languages.
  • For example, the painting may include a variety of painting types such as sketch, line drawing (baimiao), and meticulous (gongbi) painting; this embodiment does not limit this.
  • The image acquisition portion 130 can include a miniature camera, for example a miniature depth camera, for capturing depth images of objects within the user's field of view.
  • A depth image, also called a range image, is an image whose pixel values represent the distance (depth) from the depth camera to points in the captured scene; this embodiment includes but is not limited to this (see the back-projection sketch below).
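  • To use such a depth image for pen-tip tracking, each pixel must be converted into a 3D point in camera coordinates. The following is a minimal sketch, assuming a pinhole camera model with hypothetical intrinsic parameters fx, fy, cx, cy (the disclosure does not specify the camera model):

```python
import numpy as np

def depth_pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project one depth pixel (u, v) with depth value `depth`
    into 3D camera coordinates using a pinhole model (no distortion).

    fx, fy, cx, cy are assumed camera intrinsics; in a real device they
    would come from calibration of the miniature depth camera.
    """
    z = float(depth)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```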
  • The control unit 150 can be implemented in software executed by various types of processors.
  • For example, considering the level of existing hardware processes, the control unit 150 may be a module implemented in software.
  • Alternatively, the control unit 150 may be implemented in hardware; the hardware circuit includes conventional very-large-scale integration (VLSI) circuits or gate arrays, and devices such as logic chips and transistors.
  • The control unit 150 can also be implemented by a programmable hardware device, such as a field-programmable gate array, programmable array logic, or a programmable logic device, which is not limited in this embodiment.
  • For example, the display portion 120 includes a projection portion 121 and a transflective portion 122.
  • The projection portion 121 projects the painting/calligraphy graphic corresponding to the preset painting and calligraphy information to be displayed by the display portion 120 onto the transflective portion 122.
  • The transflective portion 122 is configured to reflect the image light of the painting/calligraphy graphic projected by the projection portion 121 into the user's eyes, so that the human eye can view the virtual image of the graphic; the transflective portion 122 also transmits light from the painting and calligraphy area so that the area likewise enters the user's eyes. Through the transflective portion, the user therefore sees both the virtual image of the painting/calligraphy graphic and the painting and calligraphy area, that is, the virtual image of the graphic appears mapped in the painting and calligraphy area.
  • For example, the transflective portion 122 can include a translucent lens; this embodiment includes but is not limited to this. It should be noted that FIG. 1e schematically shows the projection portion located above the transflective portion, which is not limiting; the projection portion may be located at other positions relative to the transflective portion, as long as it can project the painting/calligraphy graphic to be displayed by the display portion onto the transflective portion.
  • FIG. 2a is a schematic diagram of a painting and calligraphy area viewed through the display portion of the painting and calligraphy device provided by this embodiment.
  • For example, the identification module or identification circuit included in the control unit 150 is further configured to recognize painting sub-areas 201 within the painting and calligraphy area 200 based on the image acquired from the image acquisition portion 130.
  • FIG. 2a is illustrated schematically with calligraphy practice.
  • The painting sub-areas 201 in FIG. 2a are illustrated as frame graphics, and this embodiment includes but is not limited to this.
  • Alternatively, the identification module or identification circuit included in the image acquisition portion 130 is further configured to recognize the painting sub-areas 201 within the painting and calligraphy area 200 based on the acquired image.
  • Alternatively, the identification module or identification circuit 140 is further configured to recognize the painting sub-areas 201 within the painting and calligraphy area 200 based on the image acquired from the image acquisition portion 130.
  • The painting sub-area 201 may also be shown with other graphics, for example a shaded square or a blank square, as long as the identification module or identification circuit can recognize it.
  • For example, when the identification module or identification circuit recognizes an outer frame with four right angles, it determines that the current area is a painting sub-area 201 (a frame graphic); and when at least N painting sub-areas 201 are repeatedly recognized (N set according to actual demand, for example N > 2), it determines that the current area is the painting and calligraphy area 200 containing the painting sub-areas 201, as in the recognition sketch below.
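  • A minimal recognition sketch of this frame-counting heuristic, assuming an OpenCV-based pipeline (contour extraction with quadrilateral approximation); the thresholds are illustrative and not taken from the disclosure:

```python
import cv2
import numpy as np

def find_painting_sub_areas(gray, min_area=1000, angle_tol_deg=10):
    """Detect rectangular 'word box' frames: closed contours that
    approximate to four corners with near-right angles."""
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.isContourConvex(approx):
            pts = approx.reshape(4, 2).astype(float)
            ok = True
            for i in range(4):
                a, b, d = pts[i - 1], pts[i], pts[(i + 1) % 4]
                v1, v2 = a - b, d - b
                # a right angle has |cos| close to 0; allow +/- angle_tol_deg
                cos = abs(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9))
                if cos > np.sin(np.radians(angle_tol_deg)):
                    ok = False
                    break
            if ok:
                boxes.append(cv2.boundingRect(approx))
    return boxes

def is_painting_area(boxes, n_threshold=2):
    """Treat the region as a painting/calligraphy area when more than N
    sub-area frames are recognized (N > 2 in the example above)."""
    return len(boxes) > n_threshold
```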
  • After the identification module or identification circuit recognizes the painting sub-areas 201 within the painting and calligraphy area 200, it transmits a data signal to the control unit, and the control unit maps the virtual image 211 of each sub-graphic contained in the virtual image 210 of the painting/calligraphy graphic within a painting sub-area 201.
  • FIG. 2b is a schematic diagram of the user's copying process provided by this embodiment.
  • The user copies the virtual image 211 of the sub-graphic of the painting/calligraphy graphic to practice calligraphy.
  • For example, each painting sub-area 201 is a word box, the sub-graphic is the graphic of a character, and the user copies the graphic (virtual image) of the character displayed in the word box.
  • For example, the image acquisition portion 130 is further configured to acquire a depth image of the pen tip 301 used by the user, and the identification module or identification circuit included in the control unit 150 is further configured to recognize the position of the pen tip 301 from the depth image.
  • Alternatively, the image acquisition portion 130 is further configured to acquire a depth image of the pen tip 301 used by the user, and the identification module or identification circuit included in the image acquisition portion 130 recognizes the position of the pen tip 301 from the depth image.
  • Alternatively, the image acquisition portion 130 is further configured to acquire a depth image of the pen tip 301 used by the user, and the identification module or identification circuit 140 is further configured to recognize the position of the pen tip 301 from the depth image.
  • For example, the identification module or identification circuit analyzes the depth image and locks onto the pen tip when it recognizes that the shape of a pen tip is present in the painting and calligraphy area.
  • For example, the identification module or identification circuit can lock onto the pen tip 301 held in the user's hand and then track and record the trajectory of the pen tip 301.
  • Recognizing the position of the pen tip 301 includes recognizing the change of the three-dimensional coordinates of the pen tip 301 during motion.
  • For example, each painting sub-area 201 is rectangular, with length x in the X direction and length y in the Y direction; the interval between adjacent painting sub-areas 201 is a in the X direction and b in the Y direction.
  • The direction perpendicular to the painting and calligraphy area 200 is the Z direction (as shown in FIG. 2b).
  • Taking the Z coordinate of the plane in which the painting and calligraphy area 200 lies as 0 as an example, the plane of the painting and calligraphy area 200 is also the plane in which the writing carrier 300 lies.
  • Then the coordinates of the corner positions of the painting sub-areas 201 are, respectively, A(x1, y1, 0), B(x1+x, y1, 0), C(x1, y1+y, 0), D(x1+x, y1+y, 0), E(x1+x+a, y1, 0), and F(x1, y1+y+b, 0); the virtual image 211 of the sub-graphic of the painting/calligraphy graphic is mapped within the painting sub-area 201.
  • When the distance between the pen tip 301 and the writing carrier 300 is 0 and the pen tip lies within the range of a painting sub-area, its motion trajectory is extracted as the effective handwriting information 221; the motion trajectory of the pen tip 301 outside the above range is the invalid handwriting information 222.
  • That is, the effective handwriting information is the motion trajectory obtained when the distance between the pen tip and the writing carrier in the painting and calligraphy area is 0 and the trajectory falls within a predetermined area of the painting and calligraphy area, for example a painting sub-area.
  • Restricting the motion trajectory of the effective handwriting information to the painting sub-area serves to exclude mis-written strokes.
  • Mis-written strokes are excluded so that the writing within the painting sub-area (for example, the word box) can be better judged.
  • Of course, mis-written strokes outside the painting sub-area can also be regarded as defects of the work; therefore the predetermined area used to judge the effective handwriting information is not limited to the painting sub-area, and may be the entire painting and calligraphy area or any part of it, as in the filtering sketch below.
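  • A minimal filtering sketch of this effective-handwriting rule, assuming the pen-tip trajectory has already been converted into coordinates in the plane of the writing carrier (the z = 0 plane) and allowing a small contact tolerance in place of an exact distance of 0:

```python
from dataclasses import dataclass

@dataclass
class NibSample:
    x: float  # coordinates in the plane of the writing carrier (z = 0 plane)
    y: float
    z: float  # distance of the pen tip above the writing carrier

def extract_effective_handwriting(samples, sub_area, contact_tol=0.5):
    """Keep only trajectory points where the pen tip touches the writing
    carrier (z ~= 0) AND falls inside the predetermined sub-area
    (x0, y0, x1, y1); everything else is treated as invalid handwriting."""
    x0, y0, x1, y1 = sub_area
    effective, invalid = [], []
    for s in samples:
        touching = abs(s.z) <= contact_tol            # distance to carrier is 0 (within tolerance)
        inside = x0 <= s.x <= x1 and y0 <= s.y <= y1  # inside the word box
        (effective if touching and inside else invalid).append(s)
    return effective, invalid
```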
  • The control unit is further configured to emphasize the display of the user's next stroke according to the position of the pen tip, that is, according to the effective handwriting information.
  • For example, the control unit may be configured so that, while extracting the effective handwriting information from the pen-tip motion trajectory recognized by the identification module or identification circuit, the strokes of the currently written character that are already completed are weakened or not displayed, and the stroke to be written next is highlighted (for example, brightened, bolded, or blinking) to prompt the user, thereby correcting the stroke order and preventing the user from writing strokes in the wrong order, as in the sketch below.
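  • A minimal display-logic sketch of this prompt, assuming the strokes of the current character are available as an ordered list of templates and that the number of already-completed strokes has been determined by matching against the effective handwriting:

```python
def strokes_to_render(template_strokes, completed_count):
    """Given the ordered stroke templates of the current character and the
    number of strokes already matched in the effective handwriting, return
    per-stroke display styles: completed strokes dimmed, next stroke emphasized."""
    styles = []
    for i, stroke in enumerate(template_strokes):
        if i < completed_count:
            styles.append((stroke, "dim"))        # weaken or hide finished strokes
        elif i == completed_count:
            styles.append((stroke, "highlight"))  # brighten / bold / blink the next stroke
        else:
            styles.append((stroke, "normal"))
    return styles
```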
  • FIG. 2c is a schematic diagram of another painting and calligraphy area viewed through the display portion of a painting and calligraphy device according to an embodiment of the present disclosure.
  • This example is described taking a user's calligraphy practice as an example.
  • The identification module or identification circuit is further configured to recognize the first character 202 written by the user according to the image provided by the image acquisition portion.
  • For example, the first character 202 can be any text, such as the character for "good" shown in FIG. 2c.
  • This embodiment includes but is not limited to this; for example, the first character 202 may also be a special character (" ⁇ ", "X", etc.).
  • After the first character 202 is recognized, a data signal is transmitted to the control unit, the control unit sets preset positions 203 around the first character 202 (the positions of the dashed boxes shown in FIG. 2c), and the control unit then maps the virtual image 210 of the painting/calligraphy graphic at the preset positions 203 around the first character 202, that is, within the dashed boxes.
  • For example, the interval between a preset position 203 and the first character 202 is c in the X direction (lateral) and d in the Y direction (vertical); this embodiment schematically shows two preset positions 203, and the user can set the number and positions of the preset positions 203 according to actual needs, as in the placement sketch below.
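  • A minimal placement sketch for these preset positions, assuming the bounding box of the recognized first character and user-configurable offsets c and d (the number of positions is also an assumption):

```python
def preset_positions(first_char_box, c, d, count=2):
    """Compute `count` preset practice positions laid out to the right of the
    recognized first character, keeping a horizontal interval c and a vertical
    offset d from it.  first_char_box = (x, y, width, height)."""
    x, y, w, h = first_char_box
    positions = []
    for i in range(count):
        # each new box is one character-width plus the interval c further right
        positions.append((x + (i + 1) * (w + c), y + d, w, h))
    return positions
```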
  • FIG. 2d is a schematic diagram of yet another painting and calligraphy area viewed through the display portion of the painting and calligraphy device provided by this embodiment. As shown in FIG. 2d, this example is described taking a user's drawing practice as an example. When there is no painting sub-area graphic in the painting and calligraphy area 200 during drawing practice, the identification module or identification circuit is further configured to recognize the first character 202 written by the user according to the image provided by the image acquisition portion.
  • For example, the first character 202 can be a special character such as " ⁇ " or "X".
  • After the first character 202 is recognized, a data signal is transmitted to the control unit, the control unit sets a preset position 203 around the first character 202 (the dashed box in the figure), and the virtual image 210 of the painting/calligraphy graphic (the "circle" shown in FIG. 2d) is mapped at the preset position 203 around the first character 202.
  • For example, the interval between the preset position 203 and the first character 202 in the X direction (lateral) is c.
  • This embodiment includes but is not limited to this; the user can set the location of the preset position 203 according to actual needs.
  • Of course, during drawing practice the painting and calligraphy area may also contain delimited painting sub-area graphics, and the control unit maps the virtual image of the sub-graphic of the painting/calligraphy graphic into those sub-area graphics so as to assist and strengthen the user's practice of each sub-graphic.
  • The identification module or identification circuit described above may be a single identification module or circuit, or may be divided into multiple identification modules or circuits that respectively recognize the pen tip, the painting sub-areas, and the first character and transmit the signals to the control unit.
  • FIG. 3a is a schematic diagram of intelligent scoring and display of the painting and calligraphy apparatus provided in the embodiment.
  • the painting and writing apparatus further includes a memory 160.
  • the control unit 150 is communicatively coupled to the memory 160 to read at least one of the calligraphy graphic and the effective handwriting information corresponding to the preset painting information stored in the memory 160.
  • the calligraphy and painting graphics may be the copybook data that the user wants to copy.
  • the control unit 150 judges and scores according to the degree of fit of the calligraphy graphic and the effective handwriting information.
  • the present embodiment is described by taking an example in which both the calligraphy graphic and the effective handwriting information are stored in the same memory.
  • the embodiment includes but is not limited thereto.
  • the calligraphy graphic and the effective handwriting information may also be stored in two memories, respectively, and the control unit is communicatively coupled to the two memories to read the calligraphy graphic and the effective handwriting information, respectively.
  • This example is described taking the painting/calligraphy graphic as a calligraphy copybook as an example.
  • For example, the control unit can compare each piece of handwriting in the effective handwriting information with each character in the calligraphy copybook and score according to the degree of fit between the two.
  • The "degree of fit" here is the degree of coincidence between each piece of handwriting and the corresponding character in the calligraphy copybook.
  • For example, for the lowest fit band the user's score can be set to 80 points; when the fit is 85%-90%, the score can be set to 85 points; when the fit is 90%-95%, 90 points; and when the fit is 95%-100%, 100 points.
  • This embodiment does not limit this; the user can set the relationship between fit and score according to their own situation, as in the scoring sketch below.
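  • A minimal scoring sketch of this mapping; the disclosure speaks only of a "degree of fit", so intersection-over-union of rasterized character masks is an assumed metric, and the score bands follow the example thresholds above:

```python
import numpy as np

def fit_degree(handwriting_mask, copybook_mask):
    """Degree of coincidence between a written character and the copybook
    glyph, computed here as intersection-over-union of binary masks
    (one assumed metric for the 'degree of fit')."""
    inter = np.logical_and(handwriting_mask, copybook_mask).sum()
    union = np.logical_or(handwriting_mask, copybook_mask).sum()
    return inter / union if union else 0.0

def score_from_fit(fit):
    """Map the fit to a score using the example bands given in the text."""
    if fit >= 0.95:
        return 100
    if fit >= 0.90:
        return 90
    if fit >= 0.85:
        return 85
    return 80  # the example assigns 80 points to the remaining (lowest) band
```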
  • For example, the scoring of strokes includes: the control unit divides strokes into several categories according to the strokes of common characters, such as horizontal, vertical, left-falling, right-falling, horizontal hook, horizontal turn, and so on; when the user finishes writing a character, the control unit records the score of each stroke, and also records the historical scores of the last N writings (for example, the last 100 or 1000) of each stroke category, obtains a historical score curve, and compares the stroke categories to help the user understand their own writing situation, as in the sketch below.
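  • A minimal bookkeeping sketch for these per-stroke-category histories; the category names and the rolling window of N scores are assumptions consistent with the example above:

```python
from collections import defaultdict, deque

class StrokeHistory:
    """Keep the last N scores for each stroke category (horizontal, vertical,
    hook, ...) so a historical score curve can be drawn per category."""
    def __init__(self, n=100):
        self.history = defaultdict(lambda: deque(maxlen=n))

    def record(self, category, score):
        self.history[category].append(score)

    def curve(self, category):
        """Scores of the last N writings of this stroke category, oldest first."""
        return list(self.history[category])

    def weakest(self):
        """Categories sorted by average score, weakest first, so the user can
        see which strokes are written less well."""
        avg = {c: sum(s) / len(s) for c, s in self.history.items() if s}
        return sorted(avg, key=avg.get)
```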
  • The control unit 150 is further configured to transmit a display signal of the score to the display portion 120 so that the score result is displayed.
  • According to the user's habits, the score can be displayed for each character, or as the average score of a line of characters, or as the average score of a page of characters, and the score display is presented to the user through the display portion. It should be noted that whichever score display is chosen, the processing unit compares, scores, and records every character.
  • FIG. 3b is a schematic diagram of intelligent scoring and display of the painting and calligraphy apparatus provided in the embodiment.
  • For example, the painting and calligraphy device further includes an external interface 170 communicatively connected to the control unit 150, and the control unit 150 transmits the score signal to the external interface 170.
  • The external interface 170 is connected to an external device equipped with a client, such as a computer or a mobile phone, and transmits comparison data for the user to retrieve. For example, when the user finishes copying a page of a copybook, all the effective handwriting information of that page has been recorded.
  • When the user communicates with the external user interface (UI) 400 through the external interface 170 (for example, via Universal Serial Bus (USB) or Bluetooth) and reads the score and record information, the full-page copy is displayed in the user interface and compared with the painting/calligraphy graphic (for example, a side-by-side comparison or an adjustable-transparency overlay), so that the user can objectively understand their level of practice, as in the comparison sketch below.
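  • A minimal comparison sketch of the two display modes mentioned above (side-by-side and transparency overlay), assuming both pages are available as same-sized RGB arrays:

```python
import numpy as np

def overlay_comparison(copybook_rgb, handwriting_rgb, alpha=0.5):
    """Alpha-blend the user's page with the copybook so differences stand out;
    the side-by-side view simply concatenates the two images horizontally."""
    copybook = copybook_rgb.astype(np.float32)
    handwriting = handwriting_rgb.astype(np.float32)
    blended = (alpha * handwriting + (1.0 - alpha) * copybook).astype(np.uint8)
    side_by_side = np.concatenate([copybook_rgb, handwriting_rgb], axis=1)
    return blended, side_by_side
```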
  • the external UI refers to a UI interface specially designed for the user on a device for displaying a score, such as a computer for displaying a score, a mobile phone, or the like, and the present embodiment includes but is not limited thereto.
  • FIG. 3a and FIG. 3b can also be applied to the scoring and display of the calligraphy and painting graphics and the effective handwriting information in the drawing practice, and details are not described herein again.
  • control unit 150 is further configured to analyze the score to classify the writing of each stroke written by the user.
  • For example, the control unit 150 can classify and count the strokes in the user's effective handwriting information and, through this data, analyze which strokes the user writes (draws) well and which strokes are written (drawn) poorly; the analysis result can be displayed on the display portion.
  • analysis results can also be called and read by the user in the external UI.
  • the painting and calligraphy apparatus provided in this embodiment may be located on the headwear, or may be located on a desk or a chair, etc., which is not limited in this embodiment.
  • FIG. 4 is a schematic diagram of a main working flow of the painting and calligraphy apparatus provided in the embodiment.
  • the main working flow of the painting and calligraphy apparatus has been described in detail in the above content, and details are not described herein again.
  • The augmented reality technology applied in the painting and calligraphy device provided by this embodiment allows the user, on the one hand, to copy the virtual image of the preset painting and calligraphy information mapped in the painting and calligraphy area and to receive objective evaluation and guidance during repeated practice, improving the practice experience and effect and quickly raising the user's calligraphy and painting level; on the other hand, it can suggest the next stroke during calligraphy practice, thereby avoiding a wrong stroke order.
  • FIG. 5 is a schematic diagram of a painting and calligraphy apparatus provided by this embodiment.
  • the painting and calligraphy apparatus includes a headwear 110 and any of the painting and calligraphy apparatuses provided in the first embodiment, and the painting apparatus is located on the headwear 110.
  • FIG. 5 schematically illustrates the headwear portion 110 as eyeglasses, but this embodiment is not limited to this.
  • The headwear portion may also be a device worn on the user's head, such as a helmet.
  • FIG. 5 is described taking as an example a painting and calligraphy device with a separate identification module or identification circuit 140.
  • For example, the display portion 120 and the image acquisition portion 130 are located at the front of the headwear portion 110.
  • Since the identification module or identification circuit 140 is a separate part of the painting and calligraphy device, the positions of the identification module or identification circuit 140 and the control unit 150 are not limited in this embodiment; for example, they may be located directly in front of the headwear portion 110, on its side, or inside the headwear portion 110.
  • Of course, the identification module or identification circuit can also be part of the control unit or of the image acquisition portion.
  • The painting and calligraphy apparatus applies augmented reality technology; on the one hand, the user can copy the virtual image of the preset painting and calligraphy information mapped in the painting and calligraphy area and, during repeated practice, receive objective evaluation and guidance, improving the practice experience and effect and quickly raising the user's calligraphy and painting level; on the other hand, the next stroke can be suggested during calligraphy practice, thereby avoiding a wrong stroke order.
  • FIGS. 6a to 6d are flowcharts of a painting and calligraphy assisting method provided by this embodiment. For example, as shown in FIG. 6a, the specific steps include:
  • S201: Collect an image in front of the user.
  • S202: Recognize the painting and calligraphy area according to the image.
  • S203: Transmit the image light of the preset painting and calligraphy information to the user's eyes so that the preset painting and calligraphy information is mapped in the painting and calligraphy area.
  • For example, the virtual image of the preset painting and calligraphy information is mapped in the painting and calligraphy area; the overall flow is sketched below.
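  • A minimal orchestration sketch of steps S201-S203; `capture_frame`, `recognize_painting_area`, and `render_overlay` are hypothetical helpers standing in for the image acquisition portion, the identification module/circuit, and the display portion respectively:

```python
def assist_step(camera, recognizer, display, preset_info):
    """One pass of the assisting method: S201 capture an image, S202 recognize
    the painting/calligraphy area, S203 map the preset information into it."""
    frame = camera.capture_frame()                    # S201: image in front of the user
    area = recognizer.recognize_painting_area(frame)  # S202: locate the painting/calligraphy area
    if area is not None:
        display.render_overlay(preset_info, area)     # S203: map preset info into the area
    return area
```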
  • an image in front of the user can be acquired by the image acquisition unit.
  • For example, the control unit is communicatively connected to the image acquisition portion, and the painting and calligraphy area may be recognized from the image by an identification module or identification circuit included in the image acquisition portion.
  • Alternatively, the control unit is communicatively connected to the image acquisition portion, and the painting and calligraphy area may be recognized from the image by an identification module or identification circuit included in the control unit.
  • Alternatively, the painting and calligraphy area may be recognized from the image by a separate identification module or identification circuit; in this example the identification module or identification circuit is communicatively connected to the image acquisition portion and the control unit, respectively.
  • The identification module described above implements the identification function by a software algorithm executed by various types of processors.
  • For example, the identification module can be a module implemented as a software algorithm.
  • The identification circuit described above implements the identification function in hardware; that is, without considering cost, a person skilled in the art can construct a corresponding hardware circuit to implement the identification function.
  • For example, the hardware circuit includes conventional very-large-scale integration (VLSI) circuits or gate arrays, as well as semiconductor devices such as logic chips, transistors, or other discrete components.
  • The identification circuit can also be implemented by a programmable hardware device, such as a field-programmable gate array, programmable array logic, or a programmable logic device, which is not limited in this embodiment.
  • the preset painting information includes a calligraphy graphic, that is, a graphic of calligraphy and painting.
  • the preset painting information can be displayed in front of the user through the display portion.
  • control unit may be communicatively coupled to the display unit, the control unit configured to control the display unit to display the preset painting information, the image light displayed by the display portion is transmitted to the user's eye, and the virtual image of the preset calligraphy information is mapped in the calligraphy area.
  • the display unit can reflect the calligraphy and painting pattern into the eyes of the user, and the user can also view the painting area through the display unit. Therefore, the user can see that the virtual image of the calligraphy and painting graphic is mapped in the painting area.
  • the painting and painting assisting method utilizes the collected image, and recognizes the painting area in the image, and then transmits the image light of the preset painting information to the user's eye and maps the virtual image of the preset painting information in the painting area.
  • the user can see the virtual image of the preset calligraphy and painting information in the painting and calligraphy area, thus realizing the augmented reality of the preset painting and painting information and the real scene.
  • the painting aiding method applies Augmented Reality (AR), which enables the user to copy the virtual image of the preset painting and painting information mapped in the painting and calligraphy area, and quickly improve the user's calligraphy and painting level in the process of repeated practice.
  • For example, the painting and calligraphy assisting method provided by this embodiment further includes:
  • S211: Recognize a painting sub-area within the painting and calligraphy area according to the image.
  • For example, the painting sub-area within the painting and calligraphy area may be recognized from the image by the identification module or identification circuit described above.
  • This embodiment is not limited to this; for example, the sub-area may also be recognized from the image by another identification module or identification circuit.
  • S212: Map a sub-graphic of the preset painting and calligraphy information within the painting sub-area.
  • For example, the virtual image of the sub-graphic of the preset painting and calligraphy information is mapped within the painting sub-area.
  • For example, the painting sub-area is taken as a frame graphic as an example, but it is not limited to this.
  • The painting sub-area may also be shown with other graphics, for example a shaded square or a blank square, as long as it can be recognized by the identification module or identification circuit.
  • For example, when the identification module or identification circuit recognizes a frame with four right angles, it determines that the current area is a painting sub-area (a frame graphic); and when at least N painting sub-areas are repeatedly recognized (N set according to actual demand, for example N > 2), it determines that the current area is a painting and calligraphy area containing the painting sub-areas.
  • After the identification module or identification circuit recognizes the painting sub-areas in the painting and calligraphy area, it transmits a data signal to the control unit, the control unit maps the virtual image of the sub-graphic of the painting/calligraphy graphic within the painting sub-area, and the user copies the virtual image of the sub-graphic to practice calligraphy.
  • For example, the painting and calligraphy assisting method provided by this embodiment further includes:
  • S221: Recognize the first character written by the user according to the image.
  • S222: Map the preset painting and calligraphy information around the first character at a preset position within the painting and calligraphy area.
  • For example, the virtual image of the preset painting and calligraphy information is mapped around the first character at a preset position within the painting and calligraphy area.
  • This example is described taking a user's calligraphy practice as an example.
  • The identification module or identification circuit is further configured to recognize the first character written by the user according to the image provided by the image acquisition portion; after recognition, a data signal is transmitted to the control unit, the control unit sets a preset position around the first character and then maps the virtual image of the painting/calligraphy graphic at the preset position around the first character. The user can set the number and positions of the preset positions according to actual needs.
  • the above method can also play an auxiliary role in the practice of painting, and will not be described here.
  • the method for assisting painting and calligraphy provided by this embodiment further includes:
  • S231: Collect a depth image of the pen tip in front of the user.
  • S232: Recognize the change of the three-dimensional coordinates of the pen tip during movement according to the depth image so as to extract effective handwriting information.
  • For example, the effective handwriting information is the motion trajectory obtained when the distance between the pen tip and the writing carrier in the painting and calligraphy area is 0 and the trajectory falls within a predetermined area of the painting and calligraphy area, for example a painting sub-area.
  • Restricting the motion trajectory of the effective handwriting information to the painting sub-area serves to exclude mis-written strokes.
  • the erroneous writing stroke is excluded to better judge the writing situation in the sub-area of the painting (for example, the letter box).
  • the miswritten strokes outside the sub-area of the painting can also be regarded as a defect of the painting. Therefore, the predetermined area for judging the effective handwriting information here is not limited to the sub-area of the painting, but may be the entire painting area, or Any part of the painting area.
  • For example, the painting and calligraphy assisting method provided by this embodiment further includes emphasizing the display of the user's next stroke according to the effective handwriting information.
  • For example, the control unit may be configured so that, while extracting the effective handwriting information from the pen-tip motion trajectory recognized by the identification module or identification circuit, the completed strokes of the currently written character are weakened or not displayed, and the stroke to be written next is highlighted (for example, brightened, bolded, or blinking) to prompt the user, thereby correcting the stroke order and preventing the user from writing strokes in the wrong order.
  • For example, the painting and calligraphy assisting method provided by this embodiment further includes judging and scoring according to the degree of fit between the painting/calligraphy graphic corresponding to the preset painting and calligraphy information and the effective handwriting information.
  • This example is described taking the painting/calligraphy graphic as a calligraphy copybook as an example.
  • For example, the control unit can compare each piece of handwriting in the effective handwriting information with each character in the calligraphy copybook and score according to the degree of fit between the two.
  • The "degree of fit" is the degree of coincidence between each piece of handwriting and the corresponding character in the calligraphy copybook.
  • For example, for the lowest fit band the user's score can be set to 80 points; when the fit is 85%-90%, the score can be set to 85 points; when the fit is 90%-95%, 90 points; and when the fit is 95%-100%, 100 points.
  • This embodiment does not limit this; the user can set the relationship between fit and score according to their own situation.
  • For example, the scoring of strokes includes: the control unit divides strokes into several categories according to the strokes of common characters, such as horizontal, vertical, left-falling, right-falling, horizontal hook, horizontal turn, and so on; when the user finishes writing a character, the control unit records the score of each stroke, and also records the historical scores of the last N writings (for example, the last 100 or 1000) of each stroke category, obtains a historical score curve, and compares the stroke categories to help the user understand their own writing situation.
  • For example, the auxiliary method for calligraphy and painting provided by this embodiment further includes projecting, displaying, or outputting the score.
  • For example, the control unit may transmit a display signal of the score to the display section to display the scoring result.
  • For example, according to the user's habit, the score may be set to be displayed for each character, for the average of a line of characters, or for the average of a page of characters, and the score display is presented to the user through the display section. It should be noted that, whichever kind of score display is used, the processing unit compares, scores and records every character.
  • For example, the control unit can also send the score signal to an external interface.
  • For example, when the user finishes copying a page of the copybook, all the valid handwriting information of that page has been recorded. When the user connects to an external user interface (UI) through the external interface (for example, a Universal Serial Bus (USB) port or Bluetooth) and reads the score and record information, the whole copied page is displayed on the user interface and compared with the calligraphy-painting graphic (for example, by a left/right split-screen comparison or an overlay comparison), so that the user can objectively understand his or her practice level.
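The split-screen and overlay comparisons mentioned above could be composed from page images roughly as follows (a sketch using the Pillow imaging library; the function names and the default 50% transparency are assumptions, not part of the original disclosure):

```python
from PIL import Image

def side_by_side(user_page: Image.Image, copybook_page: Image.Image) -> Image.Image:
    """Left/right split-screen comparison of the copied page and the copybook page."""
    height = max(user_page.height, copybook_page.height)
    canvas = Image.new("RGB", (user_page.width + copybook_page.width, height), "white")
    canvas.paste(user_page.convert("RGB"), (0, 0))
    canvas.paste(copybook_page.convert("RGB"), (user_page.width, 0))
    return canvas

def overlay(user_page: Image.Image, copybook_page: Image.Image, alpha: float = 0.5) -> Image.Image:
    """Coincidence (overlay) comparison with adjustable transparency."""
    base = user_page.convert("RGB")
    model = copybook_page.convert("RGB").resize(base.size)
    return Image.blend(base, model, alpha)
```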
  • For example, the auxiliary method for calligraphy and painting provided by this embodiment further includes analyzing the score to classify the writing of each stroke written by the user.
  • For example, the control unit can categorize and count the strokes in the user's valid handwriting information and determine, through data analysis, which strokes the user writes (draws) well and which are written (drawn) poorly; the analysis result can be displayed by the display section.
  • The analysis result can also be retrieved and viewed by the user in the external UI.
  • With the auxiliary method for calligraphy and painting provided by this embodiment, which applies augmented reality technology, on the one hand the user can copy the virtual image of the preset calligraphy-painting information mapped in the calligraphy-painting area and, in the course of repeated practice, obtain objective evaluation and guidance, quickly improving the practitioner's calligraphy and painting level; on the other hand, a suggestion for the next stroke can be given during calligraphy practice, thereby avoiding a wrong stroke order.
  • Some embodiments of the present disclosure further provide a calligraphy-painting device, comprising: a processor; a memory; and computer program instructions stored in the memory which, when executed by the processor, perform the steps of the auxiliary method for calligraphy and painting described above.

Abstract

一种书画装置、书画设备及书画辅助方法。该书画装置包括:显示部(120),被配置为显示预设书画信息;图像采集部(130),被配置为采集用户面前的图像;控制单元(150),与显示部(120)通信连接,控制单元(150)被配置为控制显示部(120)显示预设书画信息,其中,用户面前的图像经处理得到书画区域,显示部(120)显示的图像光被传输到用户的眼中且预设书画信息被映射在书画区域中。该书画装置应用增强显示技术,使用户对映射在书画区域中的预设书画信息进行临摹,在反复练习过程中,提高练习体验和效果。

Description

书画装置、书画设备及书画辅助方法
本申请要求于2017年5月5日递交的中国专利申请第201710313042.4号的优先权,在此全文引用上述中国专利申请公开的内容以作为本申请的一部分。
技术领域
本公开至少一个实施例涉及一种书画装置、书画设备及书画辅助方法。
背景技术
随着科学技术的发展,越来越多的传统行业融入新的技术内容。书法和绘画的学习过程基本靠个人主观的努力,学习过程中也需要个人的揣摩和反复练习。在书写的过程中,讲究书写比划顺序以及下笔的力度、停顿等诸多方面。
发明内容
本公开的至少一实施例提供一种书画装置、书画设备及书画辅助方法。该书画装置应用增强显示技术,使用户对映射在书画区域中的预设书画信息进行临摹,在反复练习的过程中,提高练习的体验和效果,快速提高用户的书法和绘画水平。
本公开的至少一实施例提供一种书画装置,包括:显示部,被配置为显示预设书画信息;图像采集部,被配置为采集用户面前的图像;控制单元,与显示部通信连接,控制单元被配置为控制显示部显示预设书画信息,其中,用户面前的图像经处理得到书画区域,显示部显示的图像光被传输到用户的眼中且预设书画信息被映射在书画区域中。
例如,在本实施例一示例提供的书画装置中,控制单元与图像采集部通信连接,控制单元包括识别模块,识别模块被配置为处理图像采集部采集的图像以得到书画区域。
例如,在本实施例一示例提供的书画装置中,识别模块还被配置为根据图像识别书画区域内的书画子区域,控制单元控制显示部以使预设书画信息的子图形映射在书画子区域内。
例如,在本实施例一示例提供的书画装置中,识别模块还被配置为根据图像识别用户书写的第一个字符,控制单元控制显示部以使预设书画信息映射在第一个字符周围且在书画区域内的预设位置处。
例如,在本实施例一示例提供的书画装置中,图像采集部还被配置为采集用户所使用的笔尖的深度图像,识别模块还被配置为根据深度图像识别笔尖在运动过程中的三维坐标的变化以使控制单元提取有效笔迹信息,有效笔迹信息为笔尖与书画区域内的书写载体的距离为0时的运动轨迹,且运动轨迹在书画区域中的预定区域内。
例如,在本实施例一示例提供的书画装置中,控制单元与图像采集部通信连接,控制单元包括识别电路,识别电路被配置为处理图像采集部采集的图像以得到书画区域。
例如,在本实施例一示例提供的书画装置中,识别电路还被配置为根据图像识别书画区域内的书画子区域,控制单元控制显示部以使预设书画信息的子图形映射在书画子区域内。
例如,在本实施例一示例提供的书画装置中,识别电路还被配置为根据图像识别用户书写的第一个字符,控制单元控制显示部以使预设书画信息映射在第一个字符周围且在书画区域内的预设位置处。
例如,在本实施例一示例提供的书画装置中,图像采集部还被配置为采集用户所使用的笔尖的深度图像,识别电路还被配置为根据深度图像识别笔尖在运动过程中的三维坐标的变化以使控制单元提取有效笔迹信息,有效笔迹信息为笔尖与书画区域内的书写载体的距离为0时的运动轨迹,且运动轨迹在书画区域中的预定区域内。
例如,在本实施例一示例提供的书画装置中,控制单元与图像采集部通信连接,图像采集部包括识别模块,识别模块被配置为处理图像以得到书画区域。
例如,在本实施例一示例提供的书画装置中,识别模块还被配置为根据图像识别书画区域内的书画子区域,控制单元还被配置为控制显示部以使将预设书画信息的子图形映射在书画子区域内。
例如,在本实施例一示例提供的书画装置中,识别模块还被配置为根据图像识别用户书写的第一个字符,控制单元还被配置为控制显示部以使预设书画信息映射在第一个字符周围且在书画区域内的预设位置处。
例如,在本实施例一示例提供的书画装置中,图像采集部还被配置为采集用户所使用的笔尖的深度图像,识别模块根据深度图像识别笔尖在运动过程中的三维坐标的变化并发送给控制单元以提取有效笔迹信息,有效笔迹信息为笔尖与书画区域内的书写载体的距离为0时的运动轨迹,且运动轨迹在书画区域中的预定区域内。
例如,在本实施例一示例提供的书画装置中,控制单元与图像采集部通信连接,图像采集部包括识别电路,识别电路被配置为处理图像以得到书画区域。
例如,在本实施例一示例提供的书画装置中,识别电路还被配置为根据图像识别书画区域内的书画子区域,控制单元还被配置为控制显示部以使预设书画信息的子图形映射在书画子区域内。
例如,在本实施例一示例提供的书画装置中,识别电路还被配置为根据图像识别用户书写的第一个字符,控制单元还被配置为控制显示部以使预设书画信息映射在第一个字符周围且在书画区域内的预设位置处。
例如,在本实施例一示例提供的书画装置中,图像采集部还被配置为采集用户所使用的笔尖的深度图像,识别电路根据深度图像识别笔尖在运动过程中的三维坐标的变化并发送给控制单元以提取有效笔迹信息,有效笔迹信息为笔尖与书画区域内的书写载体的距离为0时的运动轨迹,且运动轨迹在书画区域中的预定区域内。
例如,在本实施例一示例提供的书画装置中,还包括识别模块,识别模块分别与图像采集部以及控制单元通信连接,识别模块被配置为处理图像采集部采集的图像以得到书画区域。
例如,在本实施例一示例提供的书画装置中,识别模块还被配置为根据图像识别书画区域内的书画子区域,控制单元还被配置为控制显示部以使预设书画信息的子图形映射在书画子区域内。
例如,在本实施例一示例提供的书画装置中,识别模块还被配置为根据图像识别用户书写的第一个字符,控制单元还被配置为控制显示部以使预设书画信息映射在第一个字符周围且在书画区域内的预设位置处。
例如,在本实施例一示例提供的书画装置中,图像采集部还被配置为采集用户所使用的笔尖的深度图像,识别模块还被配置为根据深度图像识别笔尖在运动过程中的三维坐标的变化并发送给控制单元以提取有效笔迹信息,有效笔迹信息为笔尖与书画区域内的书写载体的距离为0时的运动轨迹,且运动轨迹在书画区域中的预定区域内。
例如,在本实施例一示例提供的书画装置中,还包括识别电路,识别电路分别与图像采集部以及控制单元通信连接,识别电路被配置为处理图像采集部采集的图像以得到书画区域。
例如,在本实施例一示例提供的书画装置中,识别电路还被配置为根据图像识别书画区域内的书画子区域,控制单元还被配置为控制显示部以使预设书画信息的子图形映射在书画子区域内。
例如,在本实施例一示例提供的书画装置中,识别电路还被配置为根据图像识别用户书写的第一个字符,控制单元还被配置为控制显示部以使预设书画信息映射在第一个字符周围且在书画区域内的预设位置处。
例如,在本实施例一示例提供的书画装置中,图像采集部还被配置为采集用户所使用的笔尖的深度图像,识别电路还被配置为根据深度图像识别笔尖在运动过程中的三维坐标的变化并发送给控制单元以提取有效笔迹信息,有效笔迹信息为笔尖与书画区域内的书写载体的距离为0时的运动轨迹,且运动轨迹在书画区域中的预定区域内。
例如,在本实施例一示例提供的书画装置中,控制单元还被配置为根据有效笔迹信息,强调显示用户下一笔的笔画。
例如,在本实施例一示例提供的书画装置中,控制单元还被配置为根据预设书画信息与有效笔迹信息的契合度进行判断并评分,并将评分的显示信号发送给显示部以显示评分结果。
例如,在本实施例一示例提供的书画装置中,还包括:外部接口,与控制单元通信连接,控制单元还被配置为根据预设书画信息与有效笔迹信息的契合度进行判断并评分,并将评分的信号发送到外部接口。
例如,在本实施例一示例提供的书画装置中,还包括:存储器,预设书画信息和有效笔迹信息至少之一存储在存储器中。
例如,在本实施例一示例提供的书画装置中,控制单元还被配置为对评分进行分析以对用户书写的每一笔画的书写情况进行分类。
例如,在本实施例一示例提供的书画装置中,显示部包括:投影部和半透半反部,投影部将在显示部显示的预设书画信息投影到半透半反部,且半透半反部将投影部投射的预设书画信息的图像光反射到用户的眼中。
本公开的至少一实施例提供一种书画设备,包括:头戴部;以及书画装置,位于头戴部上,其中,书画装置包括本公开任一实施例提供的书画装置。
本公开的至少一实施例提供一种书画辅助方法,包括:采集用户面前的图像;根据图像识别书画区域;将预设书画信息的图像光传输到用户的眼中并将预设书画信息映射在书画区域中。
例如,在本实施例一示例提供的书画辅助方法中,还包括:根据图像识别书画区域内的书画子区域;将预设书画信息的子图形映射在书画子区域内。
例如,在本实施例一示例提供的书画辅助方法中,还包括:根据图像识别用户书写的第一个字符;将预设书画信息映射在第一个字符周围且在书画区域内的预设位置处。
例如,在本实施例一示例提供的书画辅助方法中,还包括:采集用户面前的笔尖的深度图像;根据深度图像识别笔尖在运动过程中的三维坐标的变化以提取有效笔迹信息,其中,有效笔迹信息为笔尖与书画区域内的书写载体的距离为0时的运动轨迹,且运动轨迹在书画区域中的预定区域内。
例如,在本实施例一示例提供的书画辅助方法中,还包括:根据有效笔迹信息,强调显示用户下一笔的笔画。
例如,在本实施例一示例提供的书画辅助方法中,还包括:根据预设书画信息和有效笔迹信息的契合度进行判断并评分。
例如,在本实施例一示例提供的书画辅助方法中,对评分进行投影显示或输出。
例如,在本实施例一示例提供的书画辅助方法中,分析评分以对用户书写的每一笔画的书写情况进行分类。
附图说明
为了更清楚地说明本公开实施例的技术方案,下面将对实施例的附图作简单地介绍,显而易见地,下面描述中的附图仅仅涉及本公开的一些实施例,而非对本公开的限制。
图1a-图1e为本公开一实施例提供的书画装置示意图;
图2a为本公开一实施例提供的通过书画装置的显示部观看的书画区域示意图;
图2b为本公开一实施例提供的用户临摹过程示意图;
图2c为本公开一实施例提供的另一种通过书画装置的显示部观看的书画区域示意图;
图2d为本公开一实施例提供的另一种通过书画装置显示部观看的书画区域示意图;
图3a为本公开一实施例提供的书画装置进行智能评分及显示的示意图;
图3b为本公开一实施例提供的书画装置进行智能评分及显示的示意图;
图4为本公开一实施例提供的书画装置的主要工作流程示意图;
图5为本公开一实施例提供的书画设备示意图;
图6a-图6d为本公开一实施例提供的书画辅助方法流程图。
具体实施方式
为使本公开实施例的目的、技术方案和优点更加清楚,下面将结合本公开实施例的附图,对本公开实施例的技术方案进行清楚、完整地描述。显然,所描述的实施例是本公开的一部分实施例,而不是全部的实施例。基于所描述的本公开的实施例,本领域普通技术人员在无需创造性劳动的前提下所获得的所有其他实施例,都属于本公开保护的范围。
除非另外定义,本公开使用的技术术语或者科学术语应当为本公开所属领域内具有一般技能的人士所理解的通常意义。本公开中使用的“第一”、“第二”以及类似的词语并不表示任何顺序、数量或者重要性,而只是用来区分不同的组成部分。“包括”或者“包含”等类似的词语意指出现该词前面的元件或者物件涵盖出现在该词后面列举的元件或者物件及其等同,而不排除其他元件或者物件。“上”、“下”、“左”、“右”等仅用于表示相对位置关系,当被描述对象的绝对位置改变后,则该相对位置关系也可能相应地改变。
在研究中,本申请的发明人发现:一般在书法和绘画的自主练习中,容易出现仅靠用户主观判断,得不到客观的评价与指导意见的情况。而在书法的练习过程中极易出现书写笔画顺序不对的情况。
本公开的实施例提供一种书画装置、书画设备及书画辅助方法。该书画装置包括:显示部、图像采集部以及控制单元。显示部被配置为显示预设书画信息;图像采集部被配置为采集用户面前的图像;控制单元与显示部通信连接,控制单元被配置为控制显示部显示预设书画信息,用户面前的图像经处理得到书画区域,显示部显示的图像光被传输到用户的眼中且预设书画信息被映射在书画区域中。该书画装置应用增强显示技术,使用户可以对映射在书画区域中的预设书画信息进行临摹,在反复练习的过程中,提高练习效果,快速提高用户的书法和绘画水平。
下面结合附图对本公开实施例提供的书画装置、书画设备及书画辅助方法进行描述。
本公开一实施例提供一种书画装置,图1a-图1e为本实施例提供的书画装置示意图。如图1a所示,该书画装置包括显示部120、图像采集部130以及控制单元150。书画装置中的显示部120被配置为在用户面前显示预设书画信息;图像采集部130被配置为采集用户面前的图像;控制单元150与显示部120通信连接,控制单元150被配置为控制显示部120显示预设书画信息,用户面前的图像经处理得到书画区域,显示部120显示的图像光被传输到用户的眼中且预设书画信息的虚像映射在书画区域中。上述的预设书画信息的虚像是指显示部上显示的预设书画信息在使用该书画装置的用户的眼睛中形成的虚像。
例如,如图1b所示,本实施例的一示例中,控制单元150与图像采集部130通信连接,控制单元150包括识别模块或者识别电路,识别模块或者识别电路被配置为处理图像采集部130采集的图像以得到书画区域。
例如,如图1c所示,本实施例的一示例中,控制单元150与图像采集部130通信连接,图像采集部130包括识别模块或者识别电路,识别模块或者识别电路被配置为处理图像采集部130采集的图像以得到书画区域。
例如,如图1d所示,本实施例的一示例中,书画装置还包括识别模块或者识别电路140,识别模块或者识别电路140分别与图像采集部130以及控制单元150通信连接,识别模块或者识别电路140被配置为处理图像采集部130采集的图像以得到书画区域。
上述的识别模块指用软件算法实现识别功能,以便由各种类型的处理器执行。例如,考虑到现有硬件工艺的水平,识别模块可以为以软件算法实现的模块。
上述的识别电路指用硬件实现识别功能,即在不考虑成本的情况下,本领域技术人员都可以搭建对应的硬件电路来实现识别功能。例如,该硬件电路包括常规的超大规模集成(VLSI)电路或者门阵列以及诸如逻辑芯片、晶体管之类的现有半导体或者是其它分立的元件。例如,识别电路还可以用可编程硬件设备,诸如现场可编程门阵列、可编程阵列逻辑、可编程逻辑设备等实现,本实施例对此不作限制。
本实施例提供的书画装置利用图像采集部以及识别模块或者识别电路采集并识别出书画区域,然后通过控制单元以及显示部显示预设书画信息,使用户可以在书画区域看到预设书画信息的虚像,因此,实现了预设书画信息与现实场景的增强现实。该书画装置应用增强现实技术(Augmented Reality,AR),使用户可以对映射在书画区域中的预设书画信息的虚像进行临摹,在反复练习的过程中,提高练习体验和效果,增加学习乐趣,快速提高用户的书法和绘画水平。
需要说明的是,上述的“通信连接”在图中以连接直线表示,是指可相互传输或接收数据信息。
例如,图1d所示的识别模块或者识别电路140接收图像采集部130传输的图像信号,控制单元150接收识别模块或者识别电路140的数据信号,控制单元150向显示部120发送显示信号等。该“通信连接”可以包括有线的方式(例如通过电缆或光纤相连)和无线的方式(例如通过wifi等无线网络相连);此外,这里的用户是指正在使用该书画装置的人。
例如,预设书画信息包括书画图形,即,书法和绘画的图形。例如,书法可以包括楷书、宋体、草书和隶书等多种标准字体或者各国文字等,绘画可以包括素描、白描和工笔画等多种绘画类型,本实施例对此不作限制。
例如,图像采集部130可以包括微型摄像头,例如包括微型深度摄像头,用于采集用户视野内的物体的深度图像。该深度图像也可被称为距离影像,是指包括从深度摄像头到所拍摄的场景中各点的距离(深度)的图像,本实施例包括但不限于此。
例如,控制单元150可以用软件实现,以便由各种类型的处理器执行。例如,在考虑到现有硬件工艺的水平时,控制单元150可以为以软件实现的模块。在不考虑成本的情况下,本领域技术人员都可以搭建对应的硬件电路来实现对应的功能,该硬件电路包括常规的超大规模集成(VLSI)电路或者门阵列以及诸如逻辑芯片、晶体管之类的现有半导体或者是其它分立的元件。例如,控制单元150还可以用可编程硬件设备,诸如现场可编程门阵列、可编程阵列逻辑、可编程逻辑设备等实现,本实施例对此不作限制。
例如,如图1e所示,显示部120包括投影部121和半透半反部122,投影部121将要在显示部120显示的预设书画信息对应的书画图形投影到半透半反部122上,且半透半反部122被配置为将投影部121投射的书画图形的图像光反射到用户的眼中,使人眼能够观看到书画图形的虚像;半透半反部122还可以透射书画区域,使书画区域也进入到用户的眼中。因此,用户通过半透半反部既可以看到书画图形的虚像,也可以看到书画区域,即,用户看到书画图形的虚像映射在书画区域内。
例如,半透半反部122可以包括半透式镜片,本实施例包括但不限于此。需要说明的是,图1e示意性的示出投影部位于半透半反部的上方,本实施例对此不作限制,例如,投影部还可以位于半透半反部的其他位置,只要投影部能将要在显示部显示的书画图形投影到半透半反部上即可。
例如,图2a为本实施例提供的通过书画装置的显示部观看的书画区域示意图。本实施例的一示例中,如图1b和图2a所示,控制单元150包括的识别模块或者识别电路还被配置为根据从图像采集部130获取的图像来识别书画区域200内的书画子区域201。图2a示意性的以练习书法为例进行描述,图2a中书画子区域201以边框图形示出,本实施例包括但不限于此。
例如,本实施例的一示例中,如图1c和图2a所示,图像采集部130包括的识别模块或者识别电路还被配置为根据从图像采集部130获取的图像来识别书画区域200内的书画子区域201。
例如,本实施例的一示例中,如图1d和图2a所示,识别模块或者识别电路140还被配置为根据从图像采集部130获取的图像来识别书画区域200内的书画子区域201。
例如,书画子区域201还可以以其他图形示出,例如,包括阴影方格或空白方格等图形,只要能够使识别模块或者识别电路识别即可。例如,当识别模块或者识别电路识别到带有四个直角的外框时,判定当前区域为书画子区域201(边框图形),当识别到至少重复出现N(根据实际需求设定,例如N>2)个书画子区域201时,判定当前区域为包含书画子区域201的书画区域200。
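例如,下面给出一个仅作示意的实现草图(并非本公开原文的一部分,其中的函数名与阈值参数均为举例假设):利用轮廓近似检测带有四个直角的外框作为候选书画子区域,当候选框数量达到 N 时即判定当前画面包含书画区域。

```python
import cv2
import numpy as np

def find_sub_regions(gray: np.ndarray, n_min: int = 3) -> list:
    """在 8 位灰度图中寻找近似矩形(四个角)的外框作为候选书画子区域;
    当候选框数量达到 n_min 时,认为当前画面包含书画区域,返回各框的 (x, y, w, h)。"""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for contour in contours:
        approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
        # 四边形、凸、且面积不至于过小,近似认为是带四个直角的字框
        if len(approx) == 4 and cv2.isContourConvex(approx) and cv2.contourArea(approx) > 100:
            boxes.append(cv2.boundingRect(approx))
    return boxes if len(boxes) >= n_min else []
```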
例如,如图2a所示,在识别模块或者识别电路识别书画区域200内的书画子区域201后,识别模块或者识别电路将数据信号发送给控制单元,控制单元将书画图形的虚像210中包括的子图形的虚像211映射在书画子区域201内。
例如,图2b为本实施例提供的用户临摹过程示意图,如图2a和图2b所示,用户对书画图形的子图形的虚像211进行临摹以练习书法。例如,每个书画子区域201为一个字框,子图形为字的图形,用户可以临摹显示在字框中的字的图形(虚像)。
例如,本实施例的一示例中,如图1b和图2b所示,图像采集部130还被配置为采集用户所使用的笔尖301的深度图像,控制单元150包括的识别模块或者识别电路还被配置为根据深度图像识别笔尖301的位置。
例如,本实施例的一示例中,如图1c和图2b所示,图像采集部130还被配置为采集用户所使用的笔尖301的深度图像,图像采集部130包括的识别模块或者识别电路根据深度图像识别笔尖301的位置。
例如,本实施例的一示例中,如图1d和图2b所示,图像采集部130还被配置为采集用户所使用的笔尖301的深度图像,识别模块或者识别电路140还被配置为根据深度图像识别笔尖301的位置。
例如,识别模块或者识别电路对深度图像进行分析,当识别到书画区域内存在笔尖形状时,对其进行锁定。例如,识别模块或者识别电路可以锁定用户手中倾斜的笔尖301,然后跟踪记录笔尖301的运动轨迹。
例如,如图2a和图2b所示,识别笔尖301的位置包括:识别笔尖301在运动过程中的三维坐标的变化。例如,假设书画子区域201为矩形,每个书画子区域201沿X方向的长度为x,沿Y方向的长度为y,相邻的书画子区域201之间沿X方向的间隔为a,沿Y方向的间隔为b。以沿书画区域200垂直向上的方向为Z方向(如图2b所示)。需要说明的是,本实施例以书画区域200所在的平面沿Z方向的坐标为Z=0为例进行描述,并且书画区域200所在的平面也为书写载体300所在的平面。
例如,如图2a和图2b所示,书画子区域201的直角位置的坐标分别为A(x1,y1,0),B(x1+x,y1,0),C(x1,y1+y,0),D(x1+x,y1+y,0),E(x1+x+a,y1,0),F(x1,y1+y+b,0),由于书画图形的子图形的虚像211映射在书画子区域201内,因此,笔尖301的坐标(x0,y0,z0)满足x1≤x0≤x1+x,y1≤y0≤y1+y,z0=0时,笔尖301的运动轨迹可以被提取为有效笔迹信息221,而笔尖301在上述范围之外的运动轨迹即为无效笔迹信息222。
需要说明的是,本实施例以有效笔迹信息为笔尖与书画区域内的书写载体的距离为0时的运动轨迹,且该运动轨迹落入书画区域中的预定区域,例如,书画子区域。上述将有效笔迹信息的运动轨迹限制到书画子区域内是排除误写笔画。排除误写笔画是为了更好地判断位于书画子区域(例如,字框)内的书写情况。然而,书画子区域之外的误写笔画也可以看作是一种书画的缺陷,因此,这里判断有效笔迹信息的预定区域也并不限制于书画子区域,而也可以是整个书画区域,或者书画区域的任意一部分。
例如,控制单元还被配置为根据笔尖的位置,即,有效笔迹信息,强调显示用户下一笔的笔画。例如,可以将控制单元设置为:在提取识别模块或者识别电路存储的笔尖运动轨迹的有效笔迹信息的过程中,对当前书写的字中已完成的笔画弱化或不显示,对下一笔将要书写的笔画进行强调显示(例如,增亮、加粗、闪动等)以提示用户下一笔的笔画,从而实现纠正下笔顺序,避免用户出现倒下笔的情况。
例如,图2c为本实施例提供的另一种通过书画装置的显示部观看的书画区域示意图。如图2c所示,本示例以用户进行书法练习为例进行描述,当书画区域200上没有书画子区域(例如,字框等图形)时,识别模块或者识别电路还被配置为根据图像采集部提供的图像识别用户书写的第一个字符202。例如,该第一个字符202可以为任一个文字,如图2c所示的“好”字。本实施例包括但不限于此,例如,该第一个字符202还可以包括特殊字符(“X”、“Δ”等)。当识别模块或者识别电路识别到用户书写的第一个字符202后,将该数据信号传输给控制单元,控制单元对该第一个字符202的周围设定预设位置203,即,如图2c所示的虚线方框所在位置,然后控制单元将书画图形的虚像210映射在第一个字符202周围的预设位置203处,即,映射在虚线方框内。例如,预设位置203沿X方向(横向)距第一个字符202的间隔为c,预设位置203沿Y方向(竖向)距第一个字符202的间隔为d,本实施例示意性的列举2个预设位置203,用户可根据实际需要对预设位置203的个数及位置进行设定。
例如,图2d为本实施例提供的另一种通过书画装置显示部观看的书画区域示意图。如图2d所示,本示例以用户进行绘画练习为例进行描述,在进行绘画练习的过程中当书画区域200上没有书画子区域的图形时,识别模块或者识别电路还被配置为根据图像采集部提供的图像识别用户书写的第一个字符202。例如,该第一个字符202可以为包括特殊字符,如“Δ”、“X”等。当识别模块或者识别电路识别到用户书写的第一个字符202后,将该数据信号传输给控制单元,控制单元对该第一个字符202的周围设定预设位置203(图中的虚线框),然后将书画图形的虚像210(如图2d所示的“圆”)映射在第一个字符202周围的预设位置203处。例如,预设位置203沿X方向(横向)距第一个字符202的距离为c,本实施例包括但不限于此,用户可根据实际需要对预设位置203的位置进行设定。
需要说明的是,当用户绘画功底较差时,书画区域内也可以包括限定书画子区域的图形,控制单元将书画图形的子图形的虚像映射在书画子区域的图形内,以辅助并强化用户对书画图形的每个子图形的练习。另外,上述识别模块或者识别电路可以为同一个识别模块或者识别电路,也可以分为多个识别模块或者识别电路以对笔尖、书画子区域以及第一个字符分别进行识别,并将信号传输给控制单元。
例如,根据一些实施例的书画装置还包括存储器160。图3a为本实施例提供的书画装置进行智能评分及显示的示意图。如图3a所示,该书画装置还包括存储器160,控制单元150与存储器160通信连接以读取存储器160中存储的预设书画信息对应的书画图形和有效笔迹信息的至少之一。例如书画图形可以为用户想进行临摹的字帖数据。控制单元150根据书画图形和有效笔迹信息的契合度进行判断并评分。需要说明的是,本实施例以书画图形和有效笔迹信息均存储在同一个存储器里为例进行描述,本实施例包括但不限于此。例如,书画图形和有效笔迹信息还可以分别存储在两个存储器中,且控制单元分别与两个存储器通信连接以读取书画图形和有效笔迹信息。
例如,本示例以书画图形为书法字帖为例进行描述,控制单元在读取书画图形和有效笔迹信息之后,可以对有效笔迹信息中的每个笔迹与书法字帖中的每个字进行比较,并按照两者之间的契合度进行评分。这里的“契合度”即为每个笔迹与书法字帖中的每个字的重合程度。例如,当契合度为80%-85%时,可以设定用户所得成绩为80分;当契合度为85%-90%时,可以设定用户所得成绩为85分;当契合度为90%-95%时,可以设定用户所得成绩为90分;当契合度为95%-100%时,可以设定用户所得成绩为100分等,本实施例对此不作限制,用户可根据自身情况对契合度与评分的关系进行设定。
例如,还可以通过拆分有效笔迹信息中的每个笔迹的笔画,按照有效笔迹信息中的每个笔画与书画图形中的每个笔画的重合度进行打分,然后取平均分数。例如,笔画的打分包括:控制单元按照常见字的笔画,将笔画分成几大类,如:横、竖、撇、捺、横钩、横撇等,当用户完成一个字的书写后,控制单元记录每个笔画的得分,并记录每一种笔画最近N次(如100或1000次)的历史得分情况,得出历史得分曲线,并进行笔画间的对比,从而帮助用户得到自己的练字情况。
例如,如图3a所示,控制单元150还被配置为将评分的显示信号发送给显示部120以显示评分结果。例如,该评分可按照用户习惯,设置成针对每个字进行显示,或针对一行字的平均分进行显示,或针对一页字的平均分进行显示等,该评分显示通过显示部呈现在用户眼前。需要说明的是,无论哪种评分显示,处理单元都会对每个字进行对比打分并记录。
例如,图3b为本实施例提供的书画装置进行智能评分及显示的示意图。如图3b所示,书画装置还包括外部接口170,与控制单元150通信连接,控制单元150将评分的信号发送到外部接口170。例如,外部接口170与外部装有客户端的电脑、手机等设备连接,传输比较数据以供用户调用。例如,当用户完成对一页书画图形的临摹时,该页所有有效笔迹信息已记录完成。当用户通过外部接口170(例如,通用串口总线(USB)或蓝牙等)与外部用户界面(UI)400通信连接,并读取评分和记录信息时,整页字帖将在用户界面显示,并与书画图形进行比较(如左右对比比较、重合设置透明度的比较等),使用户能客观了解自己的练字水平。例如,外部UI指用户在用于显示评分的电脑,手机等安装有客户端的设备上,针对该产品专门设计的UI界面,本实施例包括但不限于此。
需要说明的是,针对绘画练习中的书画图形与有效笔迹信息的评分及显示也可以应用图3a和图3b所示的过程,在此不再赘述。
例如,控制单元150还被配置为对评分进行分析以对用户书写的每一笔画的书写情况进行分类。例如,控制单元150可以对用户的有效笔迹信息进行笔画的归类和统计,并通过数据分析出,用户哪些笔画写(画)的比较好,哪些笔画写(画)的不好,该分析结果可通过显示部显示出来。另外,该分析结果也可以供用户在外部UI中调用和读取。
需要说明的是,本实施例提供的书画装置可以位于头戴部上,也可以位于书桌或者椅子上等,本实施例对此不作限制。
例如,图4为本实施例提供的书画装置的主要工作流程示意图,该书画装置的主要工作流程在上述内容中已进行了详细描述,这里不再赘述。采用本实施例提供的书画装置应用增强现实技术,一方面可以使用户对映射在书画区域中的预设书画信息的虚像进行临摹,在反复练习的过程中,使用户获得客观的评价和指导意见,提高练习体验和效果,快速提高用户的书法和绘画水平;另一方面可以在书法练习过程中给出下一笔的建议,从而避免书写笔画顺序不对的情况。
本公开另一实施例提供一种书画设备,图5示出了本实施例提供的书画设备示意图。如图5所示,书画设备包括头戴部110以及实施例一提供的任一种书画装置,该书画装置位于头戴部110上。
需要说明的是,图5示意性的以头戴部110为眼镜为例进行描述,但本实施例不限于此,例如,头戴部还可以为头盔等戴在用户头部的设备。图5以书画装置包括单独的识别模块或者识别电路140为例进行描述。
例如,在用户佩戴头戴部110时,显示部120和图像采集部130位于头戴部110的前方。例如,在识别模块或者识别电路140是书画装置中单独的部分时,本实施例对识别模块或者识别电路140和控制单元150的位置不作限制,例如,识别模块或者识别电路140和控制单元150可以位于头戴部110的正前方、侧面,或者位于头戴部110的内部等位置。
例如,识别模块或者识别电路还可以为控制单元或者图像采集部的一部分。
本实施例提供的书画设备应用增强现实技术,一方面可以使用户对映射在书画区域中的预设书画信息的虚像进行临摹,在反复练习的过程中,使用户获得客观的评价和指导意见,提高练习体验和效果,快速提高用户的书法和绘画水平;另一方面可以在书法练习过程中给出下一笔的建议,从而避免书写笔画顺序不对的情况。
本公开另一实施例提供一种书画辅助方法,图6a-图6d为本实施例提供的书画辅助方法流程图,例如,如图6a所示,具体步骤包括:
S201:采集用户面前的图像。
S202:根据图像识别书画区域。
S203:将预设书画信息的图像光传输到用户的眼中并将预设书画信息映射在书画区域中。
例如,将预设书画信息的虚像映射在书画区域中。
例如,可通过图像采集部采集用户面前的图像。
例如,本实施例的一示例中,控制单元与图像采集部通信连接,可通过图像采集部包括的识别模块或者识别电路根据图像识别书画区域。
例如,本实施例的一示例中,控制单元与图像采集部通信连接,可通过控制单元包括的识别模块或者识别电路根据图像识别书画区域。
例如,本实施例的一示例中,可通过单独的识别模块或者识别电路根据图像识别书画区域,本示例中的识别模块或者识别电路分别与图像采集部以及控制单元通信连接。
上述的识别模块指用软件算法实现识别功能,以便由各种类型的处理器执行。例如,考虑到现有硬件工艺的水平,识别模块可以为以软件算法实现的模块。
上述的识别电路指用硬件实现识别功能,即在不考虑成本的情况下,本领域技术人员都可以搭建对应的硬件电路来实现识别功能。例如,该硬件电路包括常规的超大规模集成(VLSI)电路或者门阵列以及诸如逻辑芯片、晶体管之类的现有半导体或者是其它分立的元件。例如,识别电路还可以用可编程硬件设备,诸如现场可编程门阵列、可编程阵列逻辑、可编程逻辑设备等实现,本实施例对此不作限制。
例如,预设书画信息包括书画图形,即,书法和绘画的图形。
例如,可通过显示部在用户面前显示预设书画信息。
例如,可将控制单元与显示部通信连接,控制单元被配置为控制显示部显示预设书画信息,显示部显示的图像光被传输到用户的眼中且预设书画信息的虚像映射在书画区域中。
例如,通过显示部既可以将书画图形反射到用户的眼中,用户也可以透过显示部观看到书画区域,因此,用户可以看到书画图形的虚像映射在书画区域内。
本实施例提供的书画辅助方法利用采集图像,并识别出该图像中的书画区域,然后将预设书画信息的图像光传输到用户的眼中并将预设书画信息的虚像映射在书画区域中,可以使用户在书画区域看到预设书画信息的虚像,因此,实现了预设书画信息与现实场景的增强现实。该书画辅助方法应用增强现实技术(Augmented Reality,AR),可以使用户对映射在书画区域中的预设书画信息的虚像进行临摹,在反复练习的过程中,快速提高用户的书法和绘画水平。
例如,如图6b所示,本实施例提供的书画辅助方法还包括:
S211:根据图像识别书画区域内的书画子区域。
例如,还可以通过上述识别模块或者识别电路根据图像识别书画区域的书画子区域。本实施例不限于此,例如,还可以通过另一个识别模块或者识别电路根据图像识别书画子区域。
S212:将预设书画信息的子图形映射在书画子区域内。
例如,将预设书画信息的子图形的虚像映射在书画子区域内。
例如,本实施例以书画子区域为边框图形为例进行描述,但不限于此。例如,书画子区域还可以以其他图形示出,例如,包括阴影方格或空白方格等图形,只要能够被识别模块或者识别电路识别即可。例如,当识别模块或者识别电路识别到具有四个直角的边框时,判定当前区域为书画子区域(边框图形),当识别到至少重复出现N(根据实际需求设定,例如,N>2)个书画子区域时,判定当前区域为包含书画子区域的书画区域。
例如,在识别模块或者识别电路识别书画区域内的书画子区域后,识别模块或者识别电路将数据信号发送给控制单元,控制单元将书画图形的子图形的虚像映射在书画子区域内,用户对书画图形的子图形的虚像进行临摹以练习书法。
例如,如图6c所示,本实施例提供的书画辅助方法还包括:
S221:根据图像识别用户书写的第一个字符。
S222:将预设书画信息映射在第一个字符周围且在书画区域内的预设位置处。
例如,将预设书画信息的虚像映射在第一个字符周围且在书画区域内的预设位置处。
例如,本示例以用户进行书法练习为例进行描述,当书画区域上没有书画子区域(例如,字框等图形)时,识别模块或者识别电路还被配置为根据图像采集部提供的图像识别用户书写的第一个字符。当识别模块或者识别电路识别到用户书写的第一个字符后,将该数据信号传输给控制单元,控制单元对该第一个字符的周围设定预设位置,然后控制单元将书画图形的虚像映射在第一个字符周围的预设位置处。用户可根据实际需要对预设位置的个数及位置进行设定。
例如,上述方法对绘画的练习也同样能起到辅助作用,这里不再赘述。
例如,如图6d所示,本实施例提供的书画辅助方法还包括:
S231:采集用户面前的笔尖的深度图像。
S232:根据深度图像识别笔尖在运动过程中的三维坐标的变化以提取有效笔迹信息。
需要说明的是,本实施例以有效笔迹信息为笔尖与书画区域内的书写载体的距离为0时的运动轨迹,且该运动轨迹落入书画区域中的预定区域,例如,书画子区域。上述将有效笔迹信息的运动轨迹限制到书画子区域内以排除误写笔画。排除误写笔画是为了更好地判断位于书画子区域(例如,字框)内的书写情况。然而,书画子区域之外的误写笔画也可以看作是一种书画的缺陷,因此,这里判断有效笔迹信息的预定区域也并不限制于书画子区域,而也可以是整个书画区域,或者书画区域的任意一部分。
例如,本实施例提供的书画辅助方法还包括根据有效笔迹信息,强调显示用户下一笔的笔画。例如,可以将控制单元设置为:在提取识别模块或者识别电路识别的笔尖运动轨迹的有效笔迹信息的过程中,对当前书写的字中已完成的笔画弱化或不显示,对下一笔将书写的笔画进行强调显示(例如,增亮、加粗、闪动等)以提示用户下一笔的笔画,从而实现纠正下笔顺序,避免用户出现倒下笔的情况。
例如,本实施例提供的书画辅助方法还包括根据预设书画信息对应的书画图形和有效笔迹信息的契合度进行判断并评分。
例如,本示例以书画图形为书法字帖为例进行描述,控制单元在读取书画图形和有效笔迹信息之后,可以对有效笔迹信息中的每个笔迹与书法字帖中的每个字进行比较,并按照两者之间的契合度进行评分。“契合度”即为每个笔迹与书法字帖中的每个字的重合程度。例如,当契合度为80%-85%时,可以设定用户所得成绩为80分;当契合度为85%-90%时,可以设定用户所得成绩为85分;当契合度为90%-95%时,可以设定用户所得成绩为90分;当契合度为95%-100%时,可以设定用户所得成绩为100分等,本实施例对此不作限制,用户可根据自身情况对契合度与评分的关系进行设定。
例如,还可以通过拆分有效笔迹信息中的每个笔迹的笔画,按照每个笔画的重合度进行打分,然后取平均分数。例如,笔画的打分包括:控制单元按照常见字的笔画,将笔画分成几大类,如:横、竖、撇、捺、横钩、横撇等,当用户完成一个字的书写后,控制单元记录每个笔画的得分,并记录每一种笔画最近N次(如100或1000次)的历史得分情况,得出历史得分曲线,并进行笔画间的对比,从而帮助用户得到自己的练字情况。
例如,本实施例提供的书画辅助方法还包括对评分进行投影显示或输出。
例如,控制单元可以将评分的显示信号发送给显示部以显示评分结果。例如,该评分可按照用户习惯,设置成针对每个字进行显示,或针对一行字的平均分进行显示,或针对一页字的平均分进行显示等,该评分显示通过显示部呈现在用户眼前。需要说明的是,无论哪种评分显示,处理单元都会对每个字进行对比打分并记录。
例如,控制单元还可以将评分的信号发送到外部接口。例如,当用户完成对一页书画图形的临摹时,该页所有有效笔迹信息已记录完成。当用户通过外部接口(例如,通用串口总线(USB)或蓝牙等)与外部用户界面(UI)通信连接,读取评分和记录信息时,整页字帖将在用户界面显示,并与书画图形进行比较(如左右分屏比较或重合比较等),使用户能客观了解自己的练字水平。
需要说明的是,针对绘画练习中的书画图形与有效笔迹信息的评分及显示也可以应用上述的过程,在此不再赘述。
例如,本实施例提供的书画辅助方法还包括对评分进行分析以对用户书写的每一笔画的书写情况进行分类。例如,控制单元可以对用户的有效笔迹信息进行笔画的归类和统计,并通过数据分析出,用户哪些笔画写(画)的比较好,哪些笔画写(画)的不好,该分析结果可通过显示部显示出来。另外,该分析结果也可以供用户在外部UI中调用和读取。
采用本实施例提供的书画辅助方法,通过应用增强现实技术,一方面可以使用户对映射在书画区域中的预设书画信息的虚像进行临摹,在反复练习的过程中,使用户获得客观的评价和指导意见,快速提高练习者的书法和绘画水平;另一方面可以在书法练习过程中给出下一笔的建议,从而避免书写笔画顺序不对的情况。
根据本公开的一些实施例还提供一种书画装置,该书画装置包括:处理器;存储器;和存储在所述存储器中的计算机程序指令,在所述计算机程序指令被所述处理器运行时执行上述书画辅助方法所包含的各个步骤。
有以下几点需要说明:
(1)除非另作定义,本公开实施例以及附图中,同一标号代表同一含义。
(2)本公开实施例附图中,只涉及到与本公开实施例涉及到的结构,其他结构可参考通常设计。
(3)为了清晰起见,在用于描述本公开的实施例的附图中,层或区域被放大。可以理解,当诸如层、膜、区域或基板之类的元件被称作位于另一元件“上”或“下”时,该元件可以“直接”位于另一元件“上”或“下”,或者可以存在中间元件。
以上所述,仅为本公开的具体实施方式,但本公开的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本公开揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本公开的保护范围之内。因此,本公开的保护范围应以所述权利要求的保护范围为准。

Claims (38)

  1. 一种书画装置,包括:
    显示部,被配置为显示预设书画信息;
    图像采集部,被配置为采集用户面前的图像;
    控制单元,与所述显示部通信连接,所述控制单元被配置为控制所述显示部显示所述预设书画信息,
    其中,所述图像经处理得到书画区域,所述显示部显示的图像光被传输到所述用户的眼中且所述预设书画信息被映射在所述书画区域中。
  2. 根据权利要求1所述的书画装置,其中,所述控制单元与所述图像采集部通信连接,所述控制单元包括识别模块,所述识别模块被配置为处理所述图像采集部采集的图像以得到所述书画区域。
  3. 根据权利要求2所述的书画装置,其中,所述识别模块还被配置为根据所述图像识别所述书画区域内的书画子区域,所述控制单元控制所述显示部以使所述预设书画信息的子图形映射在所述书画子区域内。
  4. 根据权利要求2或3所述的书画装置,其中,所述识别模块还被配置为根据所述图像识别所述用户书写的第一个字符,所述控制单元控制所述显示部以使所述预设书画信息映射在所述第一个字符周围且在所述书画区域内的预设位置处。
  5. 根据权利要求2-4任一项所述的书画装置,其中,所述图像采集部还被配置为采集所述用户所使用的笔尖的深度图像,所述识别模块还被配置为根据所述深度图像识别所述笔尖在运动过程中的三维坐标的变化以使所述控制单元提取有效笔迹信息,所述有效笔迹信息为所述笔尖与所述书画区域内的书写载体的距离为0时的运动轨迹,且所述运动轨迹在所述书画区域中的预定区域内。
  6. 根据权利要求1所述的书画装置,其中,所述控制单元与所述图像采集部通信连接,所述控制单元包括识别电路,所述识别电路被配置为处理所述图像采集部采集的图像以得到所述书画区域。
  7. 根据权利要求6所述的书画装置,其中,所述识别电路还被配置为根据所述图像识别所述书画区域内的书画子区域,所述控制单元控制所述显示部以使所述预设书画信息的子图形映射在所述书画子区域内。
  8. 根据权利要求6或7所述的书画装置,其中,所述识别电路还被配置为根据所述图像识别所述用户书写的第一个字符,所述控制单元控制所述显示部以使所述预设书画信息映射在所述第一个字符周围且在所述书画区域内的预设位置处。
  9. 根据权利要求6-8任一项所述的书画装置,其中,所述图像采集部还被配置为采集所述用户所使用的笔尖的深度图像,所述识别电路还被配置为根据所述深度图像识别所述笔尖在运动过程中的三维坐标的变化以使所述控制单元提取有效笔迹信息,所述有效笔迹信息为所述笔尖与所述书画区域内的书写载体的距离为0时的运动轨迹,且所述运动轨迹在所述书画区域中的预定区域内。
  10. 根据权利要求1所述的书画装置,其中,所述控制单元与所述图像采集部通信连接,所述图像采集部包括识别模块,所述识别模块被配置为处理所述图像以得到所述书画区域。
  11. 根据权利要求10所述的书画装置,其中,所述识别模块还被配置为根据所述图像识别所述书画区域内的书画子区域,所述控制单元还被配置为控制所述显示部以使将所述预设书画信息的子图形映射在所述书画子区域内。
  12. 根据权利要求10或11所述的书画装置,其中,所述识别模块还被配置为根据所述图像识别所述用户书写的第一个字符,所述控制单元还被配置为控制所述显示部以使所述预设书画信息映射在所述第一个字符周围且在所述书画区域内的预设位置处。
  13. 根据权利要求10-12任一项所述的书画装置,其中,所述图像采集部还被配置为采集所述用户所使用的笔尖的深度图像,所述识别模块根据所述深度图像识别所述笔尖在运动过程中的三维坐标的变化并发送给所述控制单元以提取有效笔迹信息,所述有效笔迹信息为所述笔尖与所述书画区域内的书写载体的距离为0时的运动轨迹,且所述运动轨迹在所述书画区域中的预定区域内。
  14. 根据权利要求1所述的书画装置,其中,所述控制单元与所述图像采集部通信连接,所述图像采集部包括识别电路,所述识别电路被配置为处理所述图像以得到所述书画区域。
  15. 根据权利要求14所述的书画装置,其中,所述识别电路还被配置为根据所述图像识别所述书画区域内的书画子区域,所述控制单元还被配置为控制所述显示部以使所述预设书画信息的子图形映射在所述书画子区域内。
  16. 根据权利要求14或15所述的书画装置,其中,所述识别电路还被配置为根据所述图像识别所述用户书写的第一个字符,所述控制单元还被配置为控制所述显示部以使所述预设书画信息映射在所述第一个字符周围且在所述书画区域内的预设位置处。
  17. 根据权利要求14-16任一项所述的书画装置,其中,所述图像采集部还被配置为采集所述用户所使用的笔尖的深度图像,所述识别电路根据所述深度图像识别所述笔尖在运动过程中的三维坐标的变化并发送给所述控制单元以提取有效笔迹信息,所述有效笔迹信息为所述笔尖与所述书画区域内的书写载体的距离为0时的运动轨迹,且所述运动轨迹在所述书画区域中的预定区域内。
  18. 根据权利要求1所述的书画装置,还包括识别模块,所述识别模块分别与所述图像采集部以及所述控制单元通信连接,所述识别模块被配置为处理所述图像采集部采集的图像以得到所述书画区域。
  19. 根据权利要求18所述的书画装置,其中,所述识别模块还被配置为根据所述图像识别所述书画区域内的书画子区域,所述控制单元还被配置为控制所述显示部以使所述预设书画信息的子图形映射在所述书画子区域内。
  20. 根据权利要求18或19所述的书画装置,其中,所述识别模块还被配置为根据所述图像识别所述用户书写的第一个字符,所述控制单元还被配置为控制所述显示部以使所述预设书画信息映射在所述第一个字符周围且在所述书画区域内的预设位置处。
  21. 根据权利要求18-20任一项所述的书画装置,其中,所述图像采集部还被配置为采集所述用户所使用的笔尖的深度图像,所述识别模块还被配置为根据所述深度图像识别所述笔尖在运动过程中的三维坐标的变化并发送给所述控制单元以提取有效笔迹信息,所述有效笔迹信息为所述笔尖与所述书画区域内的书写载体的距离为0时的运动轨迹,且所述运动轨迹在所述书画区域中的预定区域内。
  22. 根据权利要求1所述的书画装置,还包括识别电路,所述识别电路分别与所述图像采集部以及所述控制单元通信连接,所述识别电路被配置为处理所述图像采集部采集的图像以得到所述书画区域。
  23. 根据权利要求22所述的书画装置,其中,所述识别电路还被配置为根据所述图像识别所述书画区域内的书画子区域,所述控制单元还被配置为控制所述显示部以使所述预设书画信息的子图形映射在所述书画子区域内。
  24. 根据权利要求22或23所述的书画装置,其中,所述识别电路还被配置为根据所述图像识别所述用户书写的第一个字符,所述控制单元还被配置为控制所述显示部以使所述预设书画信息映射在所述第一个字符周围且在所述书画区域内的预设位置处。
  25. 根据权利要求22-24任一项所述的书画装置,其中,所述图像采集部还被配置为采集所述用户所使用的笔尖的深度图像,所述识别电路还被配置为根据所述深度图像识别所述笔尖在运动过程中的三维坐标的变化并发送给所述控制单元以提取有效笔迹信息,所述有效笔迹信息为所述笔尖与所述书画区域内的书写载体的距离为0时的运动轨迹,且所述运动轨迹在所述书画区域中的预定区域内。
  26. 根据权利要求5、9、13、17、21或25中的任一项所述的书画装置,其中,所述控制单元还被配置为根据所述有效笔迹信息,强调显示所述用户下一笔的笔画。
  27. 根据权利要求26所述的书画装置,还包括:
    外部接口,与所述控制单元通信连接,所述控制单元还被配置为根据所述预设书画信息与所述有效笔迹信息的契合度进行判断并评分,并将所述评分的信号发送到所述外部接口。
  28. 根据权利要求26或27所述的书画装置,还包括:存储器,所述预设书画信息和所述有效笔迹信息至少之一存储在所述存储器中。
  29. 根据权利要求1-28任一项所述的书画装置,其中,所述显示部包括:
    投影部和半透半反部,所述投影部将在所述显示部显示的所述预设书画信息投影到所述半透半反部,且所述半透半反部将所述投影部投射的所述预设书画信息的图像光反射到所述用户的眼中。
  30. 一种书画设备,包括:
    头戴部;以及
    书画装置,位于所述头戴部上,
    其中,所述书画装置包括根据权利要求1-29中任一项所述的书画装置。
  31. 一种书画辅助方法,包括:
    采集用户面前的图像;
    根据所述图像识别书画区域;
    将预设书画信息的图像光传输到所述用户的眼中并将所述预设书画信息映射在所述书画区域中。
  32. 根据权利要求31所述的书画辅助方法,还包括:
    根据所述图像识别所述书画区域内的书画子区域;
    将所述预设书画信息的子图形映射在所述书画子区域内。
  33. 根据权利要求31或32所述的书画辅助方法,还包括:
    根据所述图像识别所述用户书写的第一个字符;
    将所述预设书画信息映射在所述第一个字符周围且在所述书画区域内的预设位置处。
  34. 根据权利要求31-33任一项所述的书画辅助方法,还包括:
    采集所述用户面前的笔尖的深度图像;
    根据所述深度图像识别所述笔尖在运动过程中的三维坐标的变化以提取有效笔迹信息,
    其中,所述有效笔迹信息为所述笔尖与所述书画区域内的书写载体的距离为0时的运动轨迹,且所述运动轨迹在所述书画区域中的预定区域内。
  35. 根据权利要求34所述的书画辅助方法,还包括:
    根据所述有效笔迹信息,强调显示所述用户下一笔的笔画。
  36. 根据权利要求34或35所述的书画辅助方法,还包括:
    根据所述预设书画信息和所述有效笔迹信息的契合度进行判断并评分。
  37. 根据权利要求36所述的书画辅助方法,其中,对所述评分进行投影显示或输出。
  38. 根据权利要求36或37所述的书画辅助方法,其中,分析所述评分以对所述用户书写的每一笔画的书写情况进行分类。
PCT/CN2017/114769 2017-05-05 2017-12-06 书画装置、书画设备及书画辅助方法 WO2018201716A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/088,624 US11107254B2 (en) 2017-05-05 2017-12-06 Calligraphy-painting device, calligraphy-painting apparatus, and auxiliary method for calligraphy painting

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710313042.4 2017-05-05
CN201710313042.4A CN108804989B (zh) 2017-05-05 2017-05-05 书画装置、书画设备及书画辅助方法

Publications (1)

Publication Number Publication Date
WO2018201716A1 true WO2018201716A1 (zh) 2018-11-08

Family

ID=64016893

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/114769 WO2018201716A1 (zh) 2017-05-05 2017-12-06 书画装置、书画设备及书画辅助方法

Country Status (3)

Country Link
US (1) US11107254B2 (zh)
CN (1) CN108804989B (zh)
WO (1) WO2018201716A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109948713A (zh) * 2019-03-22 2019-06-28 邓斌 一种书画作品的鉴证方法

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3908962A1 (en) * 2019-01-11 2021-11-17 Institut Mines Telecom Method for generating information about the production of a handwritten, hand-affixed or printed trace
CN112037593A (zh) * 2019-06-03 2020-12-04 广东小天才科技有限公司 一种基于增强现实的学习交互实现方法及系统
CN110458145B (zh) * 2019-08-22 2022-12-27 司法鉴定科学研究院 一种基于二维动态特征的离线笔迹个体识别系统及方法
CN110796065A (zh) * 2019-10-26 2020-02-14 深圳市锦上科技有限公司 基于图像识别的练字评分方法、系统以及计算机可读介质
CN111312012B (zh) * 2020-02-27 2022-05-06 广东工业大学 一种书法练习指引方法及装置
CN111347813A (zh) * 2020-03-26 2020-06-30 杭州艺旗网络科技有限公司 一种ar雕塑方法
CN112258928A (zh) * 2020-10-18 2021-01-22 孙瑞峰 一种练字方法和装置
CN113326009B (zh) * 2021-03-05 2022-05-31 临沂大学 一种纸质书法作品的复制方法和装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102842144A (zh) * 2012-08-13 2012-12-26 胡宵 一种光书画轨迹数据的取得装置及方法
CN104052977A (zh) * 2014-06-12 2014-09-17 海信集团有限公司 一种交互式图像投影方法和装置
CN106371593A (zh) * 2016-08-31 2017-02-01 李姣昂 一种投影交互式书法练习系统及其实现方法
CN106373455A (zh) * 2016-09-21 2017-02-01 陈新德 一种微投临摹显示装置及显示方法

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014115095A2 (en) * 2013-01-28 2014-07-31 Ecole Polytechnique Federale De Lausanne (Epfl) Transflective holographic film for head worn display
CN103559008B (zh) * 2013-10-24 2016-06-29 深圳市掌网立体时代视讯技术有限公司 一种数字书画笔迹的显示方法及装置
CN103617642B (zh) * 2013-11-22 2017-03-15 深圳市掌网科技股份有限公司 一种数字书画方法及装置
CN103941866B (zh) * 2014-04-08 2017-02-15 河海大学常州校区 一种基于Kinect深度图像的三维手势识别方法
CN105279141B (zh) * 2015-10-27 2018-10-26 武汉改图网技术有限公司 一种基于模糊匹配算法的印刷品仿制设计方法和系统
CN205176802U (zh) 2015-11-04 2016-04-20 陈慧聪 一种智能投影笔
CN205167969U (zh) * 2015-11-04 2016-04-20 泉州市佳能机械制造有限公司 一种智能投影笔的笔端投影装置
CN105488544A (zh) 2015-12-01 2016-04-13 广东小天才科技有限公司 一种描红临摹笔迹识别的方法及系统
CN106355973B (zh) * 2016-10-28 2019-03-15 厦门优莱柏网络科技有限公司 一种绘画辅导方法及装置
US20190155895A1 (en) * 2017-11-20 2019-05-23 Google Llc Electronic text pen systems and methods


Also Published As

Publication number Publication date
US11107254B2 (en) 2021-08-31
CN108804989B (zh) 2021-11-30
CN108804989A (zh) 2018-11-13
US20210012540A1 (en) 2021-01-14

Similar Documents

Publication Publication Date Title
WO2018201716A1 (zh) 书画装置、书画设备及书画辅助方法
US10186057B2 (en) Data input device, data input method, and non-transitory computer readable recording medium storing data input program
JP5887775B2 (ja) ヒューマンコンピュータインタラクションシステム、手と手指示点位置決め方法、及び手指のジェスチャ決定方法
US10698475B2 (en) Virtual reality interaction method, apparatus and system
WO2020082566A1 (zh) 基于生物识别的远程教学方法、装置、设备及存储介质
US20150123966A1 (en) Interactive augmented virtual reality and perceptual computing platform
WO2015000286A1 (zh) 基于增强现实的三维互动学习系统及方法
CN102096471B (zh) 一种基于机器视觉的人机交互方法
US20170156589A1 (en) Method of identification based on smart glasses
CN108027656B (zh) 输入设备、输入方法和程序
CN103632169A (zh) 一种文字书写自动纠错方法和设备
US10248652B1 (en) Visual writing aid tool for a mobile writing device
US10853651B2 (en) Virtual reality interaction method, apparatus and system
CN111047947A (zh) 一种基于ar技术的书写指导器及书写指导方法
JP2019061590A (ja) 情報処理装置、情報処理システム及びプログラム
Stearns et al. The design and preliminary evaluation of a finger-mounted camera and feedback system to enable reading of printed text for the blind
CN104714650B (zh) 一种信息输入方法和装置
WO2019098872A1 (ru) Способ отображения трехмерного лица объекта и устройство для него
CN102609734A (zh) 一种机器视觉的手写识别方法和系统
US20150138088A1 (en) Apparatus and Method for Recognizing Spatial Gesture
JP2016045723A (ja) 電子機器
JP6492545B2 (ja) 情報処理装置、情報処理システム及びプログラム
US11676357B2 (en) Modification of projected structured light based on identified points within captured image
JP2016045724A (ja) 電子機器
JP2016031721A (ja) 検索装置、方法及びプログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17908316

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16/03/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17908316

Country of ref document: EP

Kind code of ref document: A1