JPH0766832A - Multimedia electronic mail system - Google Patents

Multimedia electronic mail system

Info

Publication number
JPH0766832A
JPH0766832A JP5232218A JP23221893A
Authority
JP
Japan
Prior art keywords
sender
electronic mail
text
image
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
JP5232218A
Other languages
Japanese (ja)
Inventor
Kiichi Matsuda
Eiji Morimatsu
Akira Nakagawa
章 中川
喜一 松田
映史 森松
Original Assignee
Fujitsu Ltd
富士通株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd, 富士通株式会社 filed Critical Fujitsu Ltd
Priority to JP5232218A priority Critical patent/JPH0766832A/en
Publication of JPH0766832A publication Critical patent/JPH0766832A/en
Withdrawn legal-status Critical Current

Abstract

PURPOSE: To display a received electronic mail on a monitor together with the sender's face picture and a synthesized voice. CONSTITUTION: The system is provided with a sender side 1 that generates at least a sender ID and text, and a receiver side 2 that receives the ID and text sent from the sender side 1 in the form of an electronic mail. The receiver side 2 has a monitor 25, a memory 20 for registering and storing a sender picture of the sender side 1 as a file, and a control circuit 24 which detects the ID sent from the sender side 1, selectively outputs the corresponding sender picture from the memory 20, and controls the display so that the picture appears on the monitor 25.

Description

Detailed Description of the Invention

[0001]

BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a multimedia electronic mail system, and more particularly to a multimedia electronic mail system capable of displaying a face image and outputting a synthesized voice when a received electronic mail is displayed on a monitor.

[0002]

2. Description of the Related Art In recent years, electronic mail services have come into use in which computers are connected to each other via a network and documents are exchanged. In this service, an electronic document created on the sending side is sent over the network to the computer to which it is addressed, and the receiving side displays or prints the document, so that information is exchanged.

[0003]

However, in the conventional electronic mail system, the receiving side can only read the document created by the transmitting side as it is. As a result, electronic mail to date has been plain and uninteresting.

Further, there has been a problem that the conventional electronic mail has not always been sufficient to cope with the technological trend of multimedia. Therefore, it is an object of the present invention to provide a new electronic mail system that solves the above conventional problems.

[0005]

A multimedia electronic mail system according to the present invention has a sender side that generates at least a sender ID and text, and a receiver side that receives the ID and text sent from the sender side by electronic mail.

The receiving side further has a monitor and a memory for registering and storing the sender image of the transmitting side as a file.

A control circuit is provided which detects the ID sent from the sending side, and selectively outputs the sender image from the memory according to the detected ID and displays the sender image on the monitor.

Further, the transmitting side has a subsystem for generating a sender image. The sender image generated by this subsystem is sent to the receiving side by e-mail in advance and registered and stored in the memory on the receiving side.

The control circuit has a voice synthesizing section for converting the text sent from the transmitting side into synthetic voice and outputting it. The synthesized voice from the voice synthesis unit is output together with the sender image displayed on the monitor.

The control circuit further displays the text sentence on the monitor in synchronization with the synthesized voice and the sender image.

[0011]

In the present invention, the sender image is stored and registered in the memory on the receiving side. Further, the control circuit has a voice synthesizer. The ID is sent from the sender to the receiver together with the text by e-mail.

Therefore, the receiving side detects the ID and selectively outputs the sender image stored and registered in the memory. This selectively output sender image is displayed on the monitor.
At the same time, the voice synthesizer generates a synthesized voice corresponding to the text and outputs it together with the displayed sender image.

Further, the text itself can be displayed together with the sender image and the synthesized voice. The sender can therefore be recognized far more easily than with conventional electronic mail, which merely displays or prints text.

[0014]

FIG. 1 is a conceptual block diagram for explaining an embodiment of the present invention. Reference numeral 1 is a sending side (Mr. A), 2 is a receiving side computer system, and 3 is a transmission line.

The transmitting side 1 is provided with a subsystem 4 for generating composite image data that is sent to the counterpart receiving side 2 (described later) and registered there as the sender image. Reference numeral 5 is a device for creating the electronic mail text to be sent to the receiving side 2.

On the other hand, the composite image data from the subsystem 4 and the electronic mail text from the text creating device 5 are switched by the switch 6 and input to the receiving side 2.

The receiving side 2 receives in advance the composite image data from each counterpart transmitting side 1 with whom electronic mail is exchanged (for example, Mr. A, Mr. B, Mr. C, and so on), and has a memory 20 in which the data are registered and stored as data files 21, 22, 23.

In the data files 21, 22, and 23, the composite image data transmitted in advance from the transmitting side 1 are registered and stored together with the ID of the transmitting side 1, the image data file name, and other parameter information, as shown in FIG. 2.

Here, the other parameter information is, for example, a parameter specifying the voice quality to be used when synthesizing the voice of the counterpart transmitting side 1.

The receiving side 2 further includes a control circuit 24 having an ID detecting function. The control circuit 24 receives the e-mail text and ID sent from the sender 1 and detects the ID of the sender 1 on the other end. Based on the detected ID, one of the corresponding composite image data 21, 22, 23 on the transmitting side 1 is selected from the memory 20.
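As an illustration only, the registration file table and the ID-based selection described above might be organized as in the following minimal Python sketch; the field names (image_file, voice_quality, display), the file names, and the example parameter values are assumptions made for this sketch and are not taken from the embodiment.

```python
from dataclasses import dataclass, field

@dataclass
class RegistrationEntry:
    sender_id: str          # e.g. "GDC01203" for Mr. A (value taken from FIG. 4)
    image_file: str         # file name of the registered composite image data
    voice_quality: dict     # parameter specifying this sender's voice quality
    display: dict = field(default_factory=dict)  # magnification / position parameters

# Data files 21, 22, 23 registered and stored in advance, one per correspondent.
registration_table = {
    "GDC01203": RegistrationEntry("GDC01203", "mr_a_faces.dat",
                                  {"pitch": 1.0}, {"scale": 1.0, "position": (0, 0)}),
    # entries for Mr. B, Mr. C, ... would follow
}

def select_registration(detected_id):
    """Select the registration data corresponding to the detected ID (step S3)."""
    return registration_table.get(detected_id)   # None if the ID is not registered
```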

A monitor 25 displays the composite image data selected from the memory 20 and the electronic mail text received by the control circuit 24.

FIG. 3 shows an example of the subsystem 4 provided in the transmitting side 1 for generating the composite image data. This subsystem 4 can be built, for example, according to techniques previously announced by the present inventors in a first paper, "Fabrication of a Face Image Synthesis Subsystem for a PC Text-to-Video Conversion System" (IEICE Technical Report, IE92-76, 1992-11, pages 7-12), and a second paper, "Addition of Eye Movement in a Text-to-Moving-Image Conversion System for Personal Computers" (IEICE Technical Report, IE92-131, PRU92-154, 1993-03, pages 81-88).

In the figure, reference numeral 31 is a video data memory holding an original image obtained by imaging the upper half of a person. Reference numeral 32 is a memory that stores a mouth shape model created by mapping triangular patches, as shown in FIG. 2 on page 8 of the first paper.

Reference numeral 33 is an arithmetic circuit that transforms the mouth shape model stored in the memory 32 into face images with a limited set of mouth shapes, based on the video data of the original image stored in the memory 31. Specifically, for each triangular patch constituting the mouth shape model in the memory 32, the arithmetic circuit 33 obtains the relationship between the position coordinates of the three vertices of the patch before deformation (for example, with the mouth closed) and the position coordinates of the triangular patch after deformation.

Then, the image data corresponding to points inside the deformed triangular patch are generated by a texture mapping calculation from the image data in the undeformed triangular patch. The limited set of mouth shapes here consists, for example, of seven mouth shape patterns in total, corresponding to the vowels, consonants, and plosives of speech. A pattern code is assigned to each of these patterns.

The synthetic image data corresponding to each of these seven mouth-shaped pattern codes is expanded and stored in the memory 34.
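To make the texture mapping calculation concrete, the following is a minimal Python sketch (assuming numpy); the nearest-neighbour sampling, the clamping at the image borders, and the function name warp_triangle are simplifications introduced for this sketch rather than the actual operation of the arithmetic circuit 33.

```python
import numpy as np

def warp_triangle(src_img, dst_img, src_tri, dst_tri):
    """Fill dst_tri in dst_img with the texture of src_tri in src_img."""
    src_tri = np.asarray(src_tri, dtype=float)   # 3x2 vertex coordinates (x, y)
    dst_tri = np.asarray(dst_tri, dtype=float)

    # Affine map A such that [x, y, 1] @ A of a destination point gives
    # the corresponding point in the undeformed (source) patch.
    A = np.linalg.solve(np.hstack([dst_tri, np.ones((3, 1))]), src_tri)

    # Bounding box of the destination triangle, clamped to the image.
    x0, y0 = np.maximum(np.floor(dst_tri.min(axis=0)).astype(int), 0)
    x1 = min(int(np.ceil(dst_tri[:, 0].max())), dst_img.shape[1] - 1)
    y1 = min(int(np.ceil(dst_tri[:, 1].max())), dst_img.shape[0] - 1)

    T = np.column_stack([dst_tri[0] - dst_tri[2], dst_tri[1] - dst_tri[2]])
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            l01 = np.linalg.solve(T, np.array([x, y], dtype=float) - dst_tri[2])
            lam = np.array([l01[0], l01[1], 1.0 - l01.sum()])
            if (lam >= -1e-9).all():                 # pixel lies inside the deformed patch
                sx, sy = np.array([x, y, 1.0]) @ A   # map back to the original image
                sy_i = min(max(int(round(sy)), 0), src_img.shape[0] - 1)
                sx_i = min(max(int(round(sx)), 0), src_img.shape[1] - 1)
                dst_img[y, x] = src_img[sy_i, sx_i]  # nearest-neighbour sampling
    return dst_img
```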

Further, in FIG. 3, reference numeral 35 is a memory for accumulating eye shape model data created by using mapping similarly to the mouth shape model. This eye shape model is specifically created by the method described on page 85 of the second paper.

For example, the model consists of two-dimensional triangular patches obtained by dividing the area around the eyes, with 12 vertices and 14 triangular patches.

Reference numeral 36 is an arithmetic circuit which transforms the eye shape model stored in the memory 35 into a limited set of eye images, based on the video data of the original image in the memory 31. The limited eye images obtained by this calculation are generated by texture mapping in the same manner as for the mouth shape model, as described on page 86 of the second paper.

For each triangular patch forming the generated eye blink image, the relationship between the positions of the three vertices of the patch before deformation and the position of the triangular patch after deformation is obtained.

As an example, the deformed, limited eye blink images are 11 kinds of blink images (corresponding, for example, to a parameter that changes in steps of 0.1 between 0 and 1, that is, the images of the process of changing from a state where the eyes are completely open to a state where the eyes are completely closed). These 11 kinds of blink images are stored in the memory 37 with their parameter values as pattern codes.
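For illustration, the correspondence between the blink parameter and the 11 pattern codes could be sketched as follows in Python; the dictionary name, the file-name scheme, and the assumption that 0.0 means fully open are all hypothetical.

```python
# 11 blink images keyed by a parameter running from 0.0 to 1.0 in steps of 0.1.
blink_frames = {round(k * 0.1, 1): f"blink_{k:02d}.img" for k in range(11)}

def blink_pattern_code(parameter: float) -> int:
    """Quantize a blink parameter in [0, 1] to one of the 11 pattern codes (0-10)."""
    parameter = min(max(parameter, 0.0), 1.0)
    return int(round(parameter * 10))
```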

The face images and eye blink images generated as described above are combined and stored in advance in the memory 20 of the receiving side 2 through the transmission path 3, as described above. Instead of being sent through the transmission path 3, the composite image may also be handed to the receiving side on a floppy disk (FD) and stored and registered in the memory 20 there.

FIG. 4 is a detailed configuration example of the control circuit of the receiving side 2 shown in FIG. 1, and FIGS. 5 and 6 are operation flows (No. 1) and (No. 2) of the receiving side 2. In FIG. 4, the same or similar parts as those in FIG. 1 are designated by the same reference numerals.

The operation of the receiving side 2 in FIG. 4 will be described below with reference to the operation flows (FIGS. 5 and 6). In the control circuit 24 in the figure, reference numeral 241 is a receiving circuit for the electronic mail sent from the sender 1. When an e-mail is received, the receiving circuit 241 downloads it into a work memory area (not shown) within the receiving circuit 241 (step S1).

Next, the ID of the sender 1 included in the electronic mail is detected (step S2) and the electronic mail text is sent to the text analysis unit 242.

In the example of FIG. 4, the detected ID is Mr. A's ID: GDC01203. Therefore, the receiving circuit 241 accesses the memory 20 and selects and outputs the registration data corresponding to Mr. A's ID from the registration file table (see FIG. 2) (step S3).

Then, image data parameters including the display parameter 248 and the voice quality parameter 249 are set (step S4). That is, the display parameter is set in the display image selection unit 244, and the voice quality parameter is set in the voice synthesis unit 243.
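As an illustration of steps S1 to S4, the following hedged Python sketch shows one way the receiving circuit 241 could download the mail, detect the sender ID, look up the registration file table (using the table structure sketched earlier), and set the display and voice quality parameters; the "ID:" header line, the function name, and the returned tuple are assumptions of this sketch, not details given in the embodiment.

```python
def receive_mail(raw_mail: str, registration_table: dict):
    work_memory = raw_mail.splitlines()                  # step S1: download to a work area

    sender_id = None
    for line in work_memory:                             # step S2: detect the sender ID
        if line.startswith("ID:"):
            sender_id = line.split(":", 1)[1].strip()
            break

    entry = registration_table.get(sender_id)            # step S3: select registration data

    display_parameter = entry.display if entry else None          # step S4: set parameters
    voice_quality_parameter = entry.voice_quality if entry else None
    body = "\n".join(l for l in work_memory if not l.startswith("ID:"))
    return sender_id, body, display_parameter, voice_quality_parameter
```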

The image data parameters are sent from the transmitting side 1 in advance and are registered in the registration file table as initial values together with the ID, as shown in FIG. 2. Alternatively, the image data parameters may be sent to the receiving side 2 every time an electronic mail is sent.

Further, among the image data parameters, the display parameter 248 makes it possible to control the display at an arbitrary image magnification or display position. The voice quality parameter 249 changes the quality of the synthesized voice.

In particular, since the combination of the synthesized face image and the synthesized voice strongly influences the impression given to the user, it is preferable that these parameters can be set for each composite image file.

Next, the e-mail text sent from the receiving circuit 241 to the text analysis unit 242 is parsed there (step S5) and converted, according to rules, into a mouth-shape code sequence for uttering the text. That is, the text analysis unit 242 converts the text into the seven mouth-shape pattern codes described with reference to FIG. 3.
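As a purely illustrative example of such a rule-based conversion, the following Python sketch maps each character of a romanized text to one of seven mouth-shape pattern codes (five vowels, a general consonant shape, and a plosive shape); the romanization, the code numbering, and the character tables are assumptions and not the actual rules of the text analysis unit 242.

```python
VOWELS = {"a": 0, "i": 1, "u": 2, "e": 3, "o": 4}   # five vowel mouth shapes
PLOSIVES = set("pbtdkg")                             # plosive mouth shape
CONSONANT_CODE, PLOSIVE_CODE = 5, 6                  # remaining two of the seven codes

def text_to_mouth_codes(romanized_text: str) -> list:
    codes = []
    for ch in romanized_text.lower():
        if ch in VOWELS:
            codes.append(VOWELS[ch])
        elif ch in PLOSIVES:
            codes.append(PLOSIVE_CODE)
        elif ch.isalpha():
            codes.append(CONSONANT_CODE)
    return codes

# e.g. text_to_mouth_codes("konnichiwa") -> [6, 4, 5, 5, 1, 5, 5, 1, 5, 0]
```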

At the same time, a code sequence is output for a parameter designating the change of a single eye blink, or a parameter designating the number of blinks within a designated time. In this case as well, the code sequence specifying the eye blink images is output using the parameters described with reference to FIG. 3.

As a specific example, by inserting a parameter identifying an eye blink, marked with a specific symbol, at a specific place in the e-mail text, it is also possible to display a composite image of the sender blinking in correspondence with the content of the text.
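A minimal Python sketch of this markup idea is given below; the marker string "%b" is an invented placeholder, since the embodiment does not specify the actual symbol.

```python
def split_text_and_blinks(text: str, marker: str = "%b"):
    """Return the text with markers removed, plus the character positions
    at which a blink of the sender image should be inserted."""
    plain, blink_positions = [], []
    i = 0
    while i < len(text):
        if text.startswith(marker, i):       # blink requested at this point
            blink_positions.append(len(plain))
            i += len(marker)
        else:
            plain.append(text[i])
            i += 1
    return "".join(plain), blink_positions
```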

On the other hand, the text analysis unit 242 branches the electronic mail text and also sends it to the voice synthesis unit 243. As an example, a voice synthesizer FMVS-101 manufactured by Fujitsu is used as the voice synthesis unit 243. The voice synthesis unit 243 sequentially converts the input electronic mail text into voice and outputs it (step S6).

Further, as described above, the voice quality parameter 249 is input to the voice synthesis unit 243 as a parameter for vocal control. Therefore, in the process of sequentially converting the e-mail text into speech, the text is converted into voice with a voice quality based on the voice quality parameter 249.

The voice synthesis unit 243 can be built around the voice synthesizer FMVS-101 manufactured by Fujitsu, and the voice quality of the synthesized speech can be specified based on the voice quality parameter 249.
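Since the interface of the FMVS-101 is not described here, the following Python sketch is purely hypothetical: it only illustrates where a sender-specific voice quality parameter would be applied before each portion of the e-mail text is synthesized (step S6). The configure() and synthesize() calls are placeholders, not real FMVS-101 functions.

```python
class VoiceSynthesisUnit:
    """Hypothetical wrapper around a hardware synthesizer driver."""

    def __init__(self, synthesizer, voice_quality_parameter: dict):
        self.synthesizer = synthesizer                  # driver object for the hardware
        self.voice_quality = voice_quality_parameter    # voice quality parameter 249

    def speak(self, email_text: str):
        for sentence in email_text.splitlines():
            # Configure the sender-specific voice quality before synthesis;
            # configure() and synthesize() are placeholders, not real FMVS-101 calls.
            self.synthesizer.configure(self.voice_quality)
            yield self.synthesizer.synthesize(sentence)
```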

On the other hand, the mouth-shape code sequence and the blink code sequence from the text analysis unit 242 are input to the display image selection unit 244. Further, the display parameter 248 described above is also input to the display image selection unit 244.

Here, as described above, the receiving circuit 241 accesses the memory 20 and selects and outputs the registration data corresponding to Mr. A's ID from the registration file table (see FIG. 2) (step S3). At this time, based on the ID detected by the receiving circuit 241, the corresponding composite image is read from the memory 20 and expanded in card format in the memory 245.

Therefore, based on the code sequence, the display image selection unit 244 selects and outputs, through the switching selection circuit 246, the composite images of the mouth shapes and blinks expanded in card format. This output is added by the adding circuit 247 to the voice synthesis output from the voice synthesis unit 243 and sent to the monitor 25.

As a result, the monitor 25 displays a synthetic image in which the mouth shape and the blink of the eyes change, and the voice synthesis output is output as a voice.

The switching selection circuit 246 is an address selection circuit for the memory 245, which selects the images expanded in card format using the code sequence as addresses.
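Illustratively, this address selection amounts to indexing the card-format memory with each code, as in the short Python sketch below; representing the memory 245 as a Python list is an assumption of the sketch.

```python
def select_display_images(code_sequence, card_memory):
    """card_memory: the composite images expanded in card format in memory 245,
    represented here as a list indexed by pattern code."""
    return [card_memory[code] for code in code_sequence]
```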

If the ID of the sent electronic mail is not registered in the table of the receiving circuit 241, the composite image cannot be selectively output from the memory 20. In this case, a default face image file is used.

In such a case, if the default face image is the face of an actual person, the text is read aloud with a face different from the sender's, which is unnatural. To eliminate this unnaturalness, it is appropriate to use as the default face image an illustration representing the face of a person who does not actually exist, that is, a face that is not a photograph.
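A minimal sketch of this fallback, assuming the registration-table structure sketched earlier and a hypothetical default file name, could look like this:

```python
DEFAULT_FACE_FILE = "default_illustration.img"   # drawn face, not a photograph

def face_file_for(detected_id, registration_table):
    entry = registration_table.get(detected_id)
    return entry.image_file if entry else DEFAULT_FACE_FILE
```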

Returning to the operation flow of FIG. 5, if the text is finished after the selection of the display image, the process is finished here (step S8).

FIG. 6 is an operation flow explaining the operation when the contents of the text are displayed together with the face image on the monitor 25. As shown in FIG. 7, the contents of the text 252 can be displayed together with the face image 251 on the display surface 250 of the monitor 25.

In such a case, in the operation flow of FIG. 6, the text 252 and a face image 251 in which only the eyes blink are displayed on the display surface 250 of the monitor 25 (step S21). Then, when there is a key input from a keyboard (not shown) (step S22), the key input signal activates the voice synthesis unit 243 through an interface circuit (not shown), and the voice synthesis output is controlled so as to be guided to the adding circuit 247.

Therefore, the text 252 and the face image 251 reading out the text are displayed on the monitor 25, and at the same time, synthesized speech corresponding to the text is output (step S23).

When the synthesized voice output is finished, the display returns to the text 252 and the face image 251 with blinking eyes (step S24). If there is a further key input (step S25), it is determined whether the text 252 is finished (step S26).

If there is the content of the subsequent text, it is newly displayed and the synthesized voice corresponding to the text sentence is output again (step S23). If the text is complete, the display ends there.
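Taken together, the flow of FIG. 6 (steps S21 to S26) can be pictured with the following hedged Python sketch; show(), read_aloud(), and wait_for_key() are placeholders standing in for the monitor 25, the voice synthesis unit 243, and the keyboard interface, and the page-based structure is an assumption.

```python
def display_mail(pages, show, read_aloud, wait_for_key):
    for page in pages:
        show(page, face="blinking")      # step S21: text + face image with blinking eyes
        wait_for_key()                   # step S22: a key input starts the reading
        read_aloud(page)                 # step S23: face reads the text, synthesized voice output
        show(page, face="blinking")      # step S24: return to the blinking display
        wait_for_key()                   # step S25: a key input moves to the next page
    # step S26: no further text -- the display ends here
```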

[0060]

As described above with reference to the embodiments, the present invention can add enjoyment for the reader of an electronic mail, compared with electronic mail that simply displays or prints text.

Further, when the electronic mail is displayed, the sender's face is automatically selected and displayed so that the sender can be easily recognized.

[Brief description of drawings]

FIG. 1 is a conceptual block diagram illustrating an embodiment of the present invention.

FIG. 2 is a diagram showing an example of a registration data file.

FIG. 3 is a diagram illustrating a subsystem that generates combined image data.

FIG. 4 is a block diagram showing a configuration example of a receiving side of the present invention.

FIG. 5 is an operation flow (No. 1) on the receiving side according to the present invention.

FIG. 6 is an operation flow (No. 2) on the receiving side according to the present invention.

FIG. 7 is a diagram showing a display example of a receiving side monitor according to the present invention.

Claims (15)

[Claims]
1. A multimedia electronic mail system comprising: a sender side (1) for generating at least a sender ID and text; a receiver side (2) for receiving the ID and text sent from the sender side (1) by electronic mail, the receiver side (2) further having a monitor (25) and a memory (20) for registering and storing a sender image of the sender side (1) as a file; and a control circuit (24) which detects the ID sent from the sender side (1) and thereby controls the display so that the sender image is selectively output from the memory (20) and displayed on the monitor (25).
2. The multimedia electronic mail system according to claim 1, wherein the sender side (1) has a subsystem (4) for generating the sender image, and the sender image is sent beforehand to the receiver side (2) by electronic mail and registered and stored in the memory (20) of the receiver side (2).
3. The multimedia electronic mail system according to claim 1, wherein the control circuit (24) further comprises a voice synthesizing section (243) for converting the text sent from the sender side (1) into synthetic voice and outputting it, and the synthesized voice from the voice synthesizing section (243) is output together with the sender image displayed on the monitor (25).
4. The multimedia electronic mail system according to claim 3, wherein the control circuit (24) is further configured to display the text on the monitor (25) in synchronization with the synthetic voice and the sender image.
5. The multimedia electronic mail system according to claim 1, wherein when the ID sent from the sender side (1) is detected and a sender image corresponding to the detected ID is not registered and stored in the memory (20), image data prepared as a default is displayed on the monitor (25).
6. The multimedia electronic mail system according to claim 5, wherein the image data prepared as the default is an illustration, not a photograph, representing the face of a person who does not actually exist.
7. The multimedia electronic mail system according to claim 1, wherein the sender image is a plurality of face images each having a different mouth-shaped pattern.
8. The multimedia electronic mail system according to claim 7, wherein the plurality of face images have a mouth-shaped pattern for each vowel, consonant, and plosive.
9. The multimedia electronic mail system according to claim 7, wherein the different mouth-shaped patterns are created by changing the mapping of the triangular patch onto the face image.
10. The multimedia electronic mail system according to claim 7, wherein the plurality of face images are further characterized by eye blink patterns.
11. The multimedia electronic mail system according to claim 10, wherein the blink pattern of the eyes is configured to be selected corresponding to a specific symbol in the text.
12. The multimedia electronic mail system according to claim 4, wherein the control circuit (24) displays the text of an electronic mail and a face image in which only the eyes blink on the monitor (25), and is configured to start outputting, upon any key input, the synthesized voice from the voice synthesizing section (243) together with the display on the monitor (25).
13. The multimedia electronic mail system according to claim 4 or 10, wherein after the reading of the text by the synthesized voice is finished, the control circuit (24) displays the text and a blinking face image, and is configured to control so that the reading is repeated upon a predetermined key input.
14. The multimedia electronic mail system according to claim 4, wherein the control circuit (24) is configured to control so that at least the header portion of the electronic mail text is output as synthesized voice together with the display on the monitor (25).
15. The multimedia electronic mail system according to claim 2, wherein the subsystem (4) further analyzes the voice of the sender and generates a parameter corresponding to the sender's voice quality, the parameter is sent to the receiver side (2) together with the sender image and registered and stored in the memory (20) of the receiver side (2), the control circuit (24) has a voice synthesizing section (243) for converting the text sent from the sender side (1) into synthetic voice and outputting it, and the voice quality of the synthetic voice output from the voice synthesizing section (243) is determined by the parameter and output together with the sender image displayed on the monitor (25).
JP5232218A 1993-08-26 1993-08-26 Multimedia electronic mail system Withdrawn JPH0766832A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP5232218A JPH0766832A (en) 1993-08-26 1993-08-26 Multimedia electronic mail system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP5232218A JPH0766832A (en) 1993-08-26 1993-08-26 Multimedia electronic mail system

Publications (1)

Publication Number Publication Date
JPH0766832A true JPH0766832A (en) 1995-03-10

Family

ID=16935839

Family Applications (1)

Application Number Title Priority Date Filing Date
JP5232218A Withdrawn JPH0766832A (en) 1993-08-26 1993-08-26 Multimedia electronic mail system

Country Status (1)

Country Link
JP (1) JPH0766832A (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6311195B1 (en) 1996-12-20 2001-10-30 Sony Corporation Method and apparatus for sending E-mail, method and apparatus for receiving E-mail, sending/receiving method and apparatus for E-mail, sending program supplying medium, receiving program supplying medium and sending/receiving program supplying medium
US6760751B1 (en) 1996-12-20 2004-07-06 Sony Corporation Method and apparatus for automatic sending of E-mail and automatic sending control program supplying medium
US7434168B2 (en) 1996-12-20 2008-10-07 Sony Corporation Method and apparatus for sending E-mail, method and apparatus for receiving E-mail, sending/receiving method and apparatus for E-mail, sending program supplying medium, receiving program supplying medium and sending/receiving program supplying medium
US7178095B2 (en) 1996-12-20 2007-02-13 So-Net Entertainment Corporation Method and apparatus for sending E-mail, method and apparatus for receiving E-mail, sending/receiving method and apparatus for E-mail, sending program supplying medium, receiving program supplying medium and sending/receiving program supplying medium
GB2382293A (en) * 2001-10-18 2003-05-21 Hewlett Packard Co System and method for displaying graphics
GB2382293B (en) * 2001-10-18 2005-09-28 Hewlett Packard Co System and method for displaying graphics
US7385610B2 (en) 2001-10-18 2008-06-10 Hewlett-Packard Development Company, L.P. System and method for displaying graphics
WO2005052804A1 (en) * 2003-11-27 2005-06-09 Sanyo Electric Co., Ltd. Mobile communication device
JP2013176154A (en) * 2003-11-27 2013-09-05 Kyocera Corp Portable communication device
US9276891B2 (en) 2003-11-27 2016-03-01 Kyocera Corporation Mobile communication device
US9418463B2 (en) 2003-11-27 2016-08-16 Kyocera Corporation Mobile communication device
US10237219B2 (en) 2003-11-27 2019-03-19 Kyocera Corporation Mobile communication device
JP2011507092A (en) * 2007-12-13 2011-03-03 サムスン エレクトロニクス カンパニー リミテッド Multimedia e-mail composition apparatus and method

Legal Events

Date Code Title Description
A300 Withdrawal of application because of no request for examination

Free format text: JAPANESE INTERMEDIATE CODE: A300

Effective date: 20001031