WO2016122699A1 - Failure atomic update of application data files - Google Patents

Failure atomic update of application data files

Info

Publication number
WO2016122699A1
WO2016122699A1 (PCT/US2015/027195)
Authority
WO
WIPO (PCT)
Prior art keywords
file
opened
data blocks
files
application
Prior art date
Application number
PCT/US2015/027195
Other languages
English (en)
Inventor
Anton Ajay MENDEZ
Rajat VERMA
Sandya Srivilliputtur Mannarswamy
Terence P. Kelly
James Hyungsun PARK
Original Assignee
Hewlett Packard Enterprise Development Lp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Enterprise Development Lp filed Critical Hewlett Packard Enterprise Development Lp
Publication of WO2016122699A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/23Updating
    • G06F16/2308Concurrency control
    • G06F16/2336Pessimistic concurrency control approaches, e.g. locking or multiple versions without time stamps

Definitions

  • FIG. 1 illustrates a block diagram of an example system for a mechanism for failure atomic update of application data in application data files in a file system;
  • FIG. 2 illustrates a block diagram of another example system for a mechanism for failure atomic update of application data in application data files in a file system;
  • FIG. 3 illustrates a block diagram of an example implementation of a mechanism for failure atomic updates of application data in application data files in a file system, such as those shown in FIGS. 1 and 2;
  • FIG. 4 illustrates a flow chart of an example method for failure atomic update of application data in application data files in a file system;
  • FIG. 5 illustrates a block diagram of an example computing device for a mechanism for applications to perform failure atomic update of application data in application data files in a file system.
  • Examples described herein provide enhanced methods, techniques, and systems for a mechanism for applications to perform failure atomic update of application data in application data files in a file system.
  • failure atomic updates (consistent modification of application durable data, i.e., the problem of evolving durable application data without fear that failure will preclude recovery to a consistent state) protect the integrity of application data from failures, such as process crashes, kernel panics, and/or power outages.
  • file systems strive to protect internal metadata from corruption; however, file systems may not offer corresponding protection for application data, providing neither transactions on application data nor other unified solution to the failure atomic updates problem. Instead, file systems may offer primitives for controlling the order in which application data attains durability; applications may shoulder the burden of restoring consistency to their data following failures.
  • POSIX portable operating system interface
  • Some existing mechanisms may provide imperfect support for solving the failure atomic updates problem. Further, existing file systems may offer limited support for failure atomic updates, possibly due to problems associated with operating system (OS) interfaces. For example, POSIX may permit a write to succeed partially, making it difficult to define atomic semantics for this call. Further, for example, synchronization calls, such as fsync and msync, may constrain the order in which application data reaches durable media. However, applications generally remain responsible for reconstructing a consistent state of their data following a crash. Sometimes, applications may circumvent the need for recovery by using the one failure-atomic mechanism provided in conventional file systems, i.e., the file rename.
  • OS operating system
  • desktop applications can open a temporary file, write the entire modified contents of a file to it, then use rename to implement an atomic file update - a reasonable expedient for small files but one that may be untenable for large files.
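The temporary-file-plus-rename idiom described above can be sketched as follows (a minimal illustration; `atomic_replace` is a hypothetical helper name, and the atomicity of the final step rests on the POSIX guarantee that rename atomically replaces the target within one file system):

```python
import os
import tempfile

def atomic_replace(path, data):
    """Replace the contents of `path` atomically via write-temp-then-rename."""
    dirname = os.path.dirname(os.path.abspath(path)) or "."
    # The temporary file must live on the same file system as the target,
    # otherwise the final rename is no longer a single atomic operation.
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        os.write(fd, data)
        os.fsync(fd)          # data must be durable before the rename commits it
    finally:
        os.close(fd)
    os.replace(tmp, path)     # atomic: readers see the old or new file, never a mix
```

Note that the entire file is rewritten on every update, which is exactly why the passage calls the idiom untenable for large files.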
  • some existing mechanisms may require special hardware and may apply only to single-file updates, and may not address modifications to memory-mapped files.
  • transaction size, i.e., the size of atomically modified data in a file, may be limited by the size of the journal, which may carry substantial overheads.
  • a journal-based implementation of a failure-atomic sync operation may suffer at least two shortcomings: one being the need to run a modified kernel, which may impede adoption, and the other being use of the file system journal, which can limit transaction sizes.
  • a simple interface to the file system may offer applications a guarantee that the application data in files always reflects the most recent successful sync mechanism, such as a syncv operation, on the files.
  • the syncv operation takes as arguments an array of file descriptors.
  • the file system guarantees that all cached blocks of the files in the passed list are flushed to stable storage. If all files passed to the syncv operation were opened with the atomic flag, then the syncv operation guarantees that all the files are atomically flushed to stable storage.
  • the interface to the file system offers the syncv mechanism along with transaction records in a transaction log file that failure-atomically commits changes to files. Furthermore, failure-injection testing verifies that the file system protects the integrity of application data from crashes.
  • the interface to the file system runs on conventional hardware and operating system and the mechanism is implementable in any file system that supports per-file writable snapshots.
  • the example implementations describe a simple interface to the file system that generalizes failure-atomic variants of write and sync operations. If files are opened with atomic flags, the state of their application data always reflects the most recent successful sync operation, such as syncv. Further, the size of atomic updates to the files is limited only by the free space in the file system and not by the file system journal. Furthermore, opening each of the files with an atomic flag guarantees that the file's application data reflects the most recent synchronization operation regardless of whether the file was modified with interfaces such as the write and/or mmap families of interfaces. The atomic flag may be implemented in a file system that supports per-file writable snapshots.
  • the syncv operation, along with the use of transaction records in a transaction log file described in the present disclosure, ensures that the updates to files are atomic in nature.
  • the file system may not rely solely on the file system journal to implement atomic updates, and the size of atomic updates may be limited only by the amount of free space in the file system.
  • Adding the interface to the file system may be relatively easy as it can run on any conventional operating system kernels and requires no special hardware.
  • syncv operation may be implemented in the existing kernels using a device input and output control (IOCTL) interface.
  • IOCTL device input and output control
  • the example implementations describe a file system that supports multi-file atomic durability via a syncv mechanism.
  • the example syncv mechanism attains failure atomicity by leveraging the write-ahead logging feature in the journal mechanism of the file system. Further, the example syncv mechanism attains failure atomicity by either applying all modifications made to data blocks of open files to the system storage disk or applying none of them.
  • any modifications to the metadata are written to a transaction log file before the changes are written to the storage disk, and the content of the transaction log file is written to the storage disk at regular intervals.
  • the file system may read the transaction log file to confirm the file system transactions. All completed transactions may be committed to the storage disk and uncompleted transactions may be undone. In such a scenario, it can be seen that the number of uncommitted records in the transaction log file, and not the amount of data in the file system, decides the speed of recovery from a crash.
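The redo/undo rule above can be illustrated with a toy log replay. The record layout is an assumption made for illustration, not the file system's actual journal format: writes are `("write", txn, key, value)` records and a `("commit", txn)` marker ends a transaction.

```python
def replay(log):
    """Rebuild state from a write-ahead log: redo the writes of committed
    transactions, discard (undo) the writes of uncommitted ones."""
    # First pass: find every transaction that reached its commit record.
    committed = {txn for kind, txn, *rest in log if kind == "commit"}
    # Second pass: apply only the writes belonging to committed transactions,
    # in log order, so later committed writes win.
    state = {}
    for kind, txn, *rest in log:
        if kind == "write" and txn in committed:
            key, value = rest
            state[key] = value
    return state
```

Recovery time here is proportional to the log length, mirroring the passage's point that the number of uncommitted records, not the amount of file system data, governs crash-recovery speed.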
  • FIG. 1 illustrates a block diagram of an example system 100 for a mechanism for applications to perform failure atomic update of application data in application data files in a file system 106.
  • the system 100 may represent any type of computing device capable of reading machine-executable instructions. Examples of the computing device may include, without limitation, a server, a desktop computer, a notebook computer, a tablet computer, a thin client, a mobile device, a personal digital assistant (PDA), and the like.
  • PDA personal digital assistant
  • the system 100 may include a processor 102 and storage device 104 coupled to the processor 102.
  • the storage device 104 may be a machine readable storage medium (e.g., a disk drive).
  • the machine-readable storage medium may also be an external medium that may be accessible to the system 100.
  • the storage device 104 may include the file system 106.
  • the file system 106 may include a failure atomic update module 108.
  • the failure atomic update module 108 may refer to software components (machine executable instructions), a hardware component or a combination thereof.
  • the failure atomic update module 108 may include, by way of example, components, such as software components, processes, tasks, co-routines, functions, attributes, procedures, drivers, firmware, data, databases, data structures and Application Specific Integrated Circuits (ASIC).
  • the failure atomic update module 108 may reside in a volatile or non-volatile storage medium and may be configured to interact with the processor 102 of the system 100.
  • the file system 106 may include data blocks, application data files, snapshots of files, directory and/or file clones implemented by atomic updates as shown in FIG. 3.
  • file clones may include shared data blocks of a file (i.e., primary file) in the file system that are implemented by atomic updates.
  • the file system may decouple logical file hierarchy from the physical storage.
  • the logical file hierarchy layer may implement the naming scheme and portable operating system interface (POSIX) compliant functions, such as creating, opening, reading, and writing files.
  • POSIX portable operating system interface
  • the physical storage layer implements write-ahead logging, caching, file storage allocation, file migration, and/or physical disk input/output (I/O) functions. This is explained in more detail with reference to FIG. 3.
  • files including associated atomic flags may be opened upon invoking open operations by an application.
  • each opened file may include data blocks: Block 0, Block 1, and Block 2, as shown at 302 in FIG. 3.
  • the atomic flag may indicate the application's desire that changes to the application data in each file be atomic.
  • a file clone including shared data blocks of the file may then be created by the application upon opening each file including the atomic flag.
  • A file clone may be a writable snapshot of the file at the time it is opened using the atomic flag.
  • the file clone may not change with any modification to the data blocks in each file.
  • the file clones may not appear in the user-visible namespace and may exist in a non-visible (hidden) namespace that is accessible to the operating system (OS).
  • OS operating system
  • each of the file clones CLONE 0 iNODE 1 to CLONE 0 iNODE N may be implemented utilizing a variant of a copy-on-write (COW) operation as shown at 304 in FIG. 3.
  • COW copy-on-write
  • a copy of the file's iNODE may be made as shown in FIG. 3.
  • Each of iNODE 1 to iNODE N may include each associated file's block map, a data structure that maps logical file offsets to block numbers on the underlying block device, as shown in FIG. 3.
  • each of the original files FILE iNODE 1 to FILE iNODE N and its associated file clone CLONE 0 iNODE 1 to CLONE 0 iNODE N, respectively, may have identical copies of the block map, and they may initially share the same storage.
  • Any modified data blocks in each opened file may be remapped by the file system upon a subsequent modification and/or addition to the file by the application.
  • modified data blocks in each file may be remapped using a COW operation, leaving the file clone's view of the file unchanged.
  • the addition of Block 3 and the remapping of the added Block 3 via COW is shown at 306 in FIG. 3. It can be seen that the file clone CLONE 0 iNODE still points to the blocks Block 0, Block 1, and Block 2 of the file at the time it was opened.
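The block-map sharing and remapping shown in FIG. 3 can be modeled with a toy in-memory block map (an illustrative data structure, not the on-disk format; `File`, `clone`, and `cow_write` are hypothetical names):

```python
class File:
    """Toy inode holding a block map: logical offset -> block number."""
    def __init__(self, block_map):
        self.block_map = dict(block_map)

def clone(f):
    # The clone gets an identical copy of the block map, so file and clone
    # initially point at the same underlying storage blocks.
    return File(f.block_map)

def cow_write(f, offset, new_block):
    # Remap the modified/added offset to a freshly allocated block; any
    # existing clone's view of the file is left unchanged.
    f.block_map[offset] = new_block
```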
  • an operation for syncing may then be initiated by the application, which in turn passes file descriptors of each opened file associated with any modified data blocks. Any modified data blocks in each file associated with any of the file descriptors passed via the syncv operation are then flushed to a stable storage medium, such as a disk drive.
  • the created file clone of each opened file associated with the file descriptors passed via the syncv operation may then be deleted using transaction records in a transaction log file residing in a journal sub-system, which facilitates performing a single-transaction delete of all files passed to syncv to make the application data files failure atomic.
  • transaction records are used in a transaction log such that the deletes of each file passed to syncv appear as a single file system transaction, thereby making the application data failure atomic.
  • a new file clone including any modified and unmodified data blocks of each file associated with the file descriptor passed via the operation may then be created.
  • the state of each file may reflect a logical state of the file at the time the application synced using the syncv operation.
  • the syncv operation is a sync vector that is similar in operation to the fsync, msync, and/or fdatasync operations and is further capable of operating substantially simultaneously on multiple files.
  • the syncv operation replacing the created file clone CLONE 0 iNODE with the new file clone CLONE 1 iNODE is shown at 308 in FIG. 3.
  • upon the last close of a file opened with the atomic flag, all cached blocks of the file are flushed and any existing file clones are deleted.
  • the above mechanism repeats itself until the file is closed by the application.
  • the failure atomic update module 108 determines whether there was an untimely system failure. Based on the outcome of the determination, if the untimely system failure occurred before deleting the file clones, the failure atomic update module 108 replaces the files with the file clones the next time the files are opened by the application. If there was no untimely system failure and the file clones are deleted, the failure atomic update module 108 creates new file clones including any modified and unmodified data blocks.
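The recovery decision can be sketched with toy visible and hidden namespaces (plain dictionaries standing in for directories; `recover_on_open` is a hypothetical helper mirroring the clone-replacement rule described above):

```python
def recover_on_open(name, visible, hidden):
    """Per-file recovery on open.

    `visible` and `hidden` map file names to contents. If a clone for `name`
    survives in the hidden namespace, a crash interrupted an atomic update
    before the clone was deleted, so the clone's consistent contents replace
    the possibly inconsistent visible file (a rename in the real system)."""
    if name in hidden:
        visible[name] = hidden.pop(name)   # rename the clone over the file
    return visible[name]
```

Because the check happens lazily at open time, files that were not being updated pay no recovery cost, matching the per-file recovery attraction discussed below.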
  • an intermediary approach may include a background daemon to search the file system for recoverable files after mount but before files are opened.
  • the failure atomic update module 108 determines whether there was an untimely failure during the delete operation. Based on the outcome of the determination, if an untimely failure during the delete operation on multiple files results in an incomplete failure atomic update, the failure atomic update module 108 replaces the files with the file clones the next time the files are opened.
  • if the system fails, recovery of files may be delayed until the files are accessed again.
  • the file system's path name lookup function may check whether each file's clone exists in the hidden namespace. The file clone is then renamed to the user-visible file, and a handle to the file clone may be returned if the file clone exists in the hidden namespace.
  • the per-file recovery offers several attractions; for example, consider an OS kernel panic that occurs while many processes are updating many files. Upon reboot, the file system may recover quickly because the in-progress updates interrupted by the crash trigger no recovery actions when the file system is mounted.
  • the applications that may not need recovery from interrupted atomic updates may not share the recovery-time penalty incurred by the crash; only those applications that benefit from application-consistent recovery may pay the penalty.
  • interrupted atomic updates (e.g., applications that are merely reading files)
  • the above described atomic failure update mechanism is built on top of the file clone feature of the file system; alternative implementations, such as using delayed journal writeback, can also be envisioned.
  • FIG. 4 illustrates a flow chart of an example method 400 for failure atomic update of application data in application data files in a file system.
  • the method 400 which is described below, may be executed on a system such as a system 100 of FIG. 1 or a system 200 of FIG. 2. However, other systems may be used as well.
  • files, including data blocks in each file and an associated atomic flag, are opened upon invocation of open operations by an application.
  • the atomic flag may indicate the application's desire that any changes to the file be atomic.
  • a file clone is created upon opening each file including the atomic flag by the application.
  • the file clone may be a writable snapshot of the file at the time it is opened using the atomic flag.
  • a file clone including shared blocks of the primary file is created upon opening the file including the atomic flag by the application. The primary file and the file clone may share the same blocks until one or more blocks in the primary file are modified.
  • any modified data blocks of each opened file are remapped upon a subsequent modification and/or addition to the file by the application.
  • any modified data blocks of each file are remapped by the file system via a copy-on-write (COW) operation, leaving the file clone's view of each file unchanged, upon a subsequent modification and/or addition to each file by the application.
  • COW copy-on-write
  • a syncv operation may then be initiated by the application, passing file descriptors of each opened file associated with any modified data blocks.
  • the syncv operation is a sync vector operation that is similar to an fsync operation, an msync operation, and/or an fdatasync operation and is further capable of operating substantially simultaneously on multiple files.
  • file descriptors associated with a subset of any opened files associated with modified data blocks are passed upon initiating a syncv operation.
  • any modified data blocks in each opened file associated with file descriptor passed via the syncv operation are flushed to a stable storage media, such as a storage disk.
  • the created file clone of each opened file associated with the file descriptor sent via the syncv operation is then deleted using transaction records in a transaction log file residing in a journal sub-system by the file system.
  • any modified data blocks in each opened file associated with the file descriptor sent via the syncv operation are flushed to a stable storage medium such that the state of the file reflects a logical state of each file at the time the application syncs using the syncv operation, and then each created file clone is deleted by the file system.
  • a new file clone is created including any modified and unmodified data blocks for an opened file associated with a file descriptor sent via the operation.
  • a determination is made as to whether the application has closed all of the files. Based on the outcome of the determination at block 414, the process 400 goes to block 404 and repeats the steps outlined in blocks 404 to 414 if any of the opened files are still open and not closed by the application. Further, based on the outcome of the determination at block 414, the process 400 goes to block 416 and stops if all the opened files are closed by the application.
  • the failure atomic update module 108 determines whether there was an untimely system failure. If the untimely system failure occurs before deleting each file clone, the file is then replaced with the file clone the next time each file is opened by the application. Based on the outcome of the determination, if there was no untimely system failure and all the file clones are deleted, new file clones are created including any modified and unmodified data blocks.
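The whole open/modify/syncv/crash cycle of FIG. 4 can be condensed into a toy simulation. Strings stand in for data blocks and `AtomicFile` is an illustrative class invented here, not the patent's implementation:

```python
class AtomicFile:
    """Toy end-to-end model of one file under the clone-based mechanism."""
    def __init__(self, contents=""):
        self.disk = contents     # what stable storage currently holds
        self.buffer = contents   # cached, possibly modified blocks
        self.clone = contents    # snapshot taken at open / last syncv

    def write(self, data):
        self.buffer = data       # modifications touch only cached blocks (COW)

    def syncv(self):
        self.disk = self.buffer  # flush modified blocks to stable storage
        self.clone = self.buffer # delete the old clone, create a new one

    def crash_and_recover(self):
        # An untimely failure discards the cache; on the next open the clone
        # (the last consistent state) replaces the file.
        self.disk = self.clone
        self.buffer = self.clone
```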
  • FIG. 5 illustrates a block diagram of an example computing device 500 for a mechanism for failure atomic update of application data in single application data file in a file system.
  • the computing device 500 includes a processor 502 and a machine- readable storage medium 504 communicatively coupled through a system bus.
  • the processor 502 may be any type of central processing unit (CPU), microprocessor, or processing logic that interprets and executes machine-readable instructions stored in the machine-readable storage medium 504.
  • the machine-readable storage medium 504 may be a random access memory (RAM) or another type of dynamic storage device that may store information and machine-readable instructions that may be executed by the processor 502.
  • the machine-readable storage medium 504 may be synchronous DRAM (SDRAM), double data rate (DDR), Rambus DRAM (RDRAM), Rambus RAM, etc., or storage memory media such as a floppy disk, a hard disk, a CD-ROM, a DVD, a pen drive, and the like.
  • the machine- readable storage medium 504 may be a non-transitory machine-readable medium.
  • the machine -readable storage medium 504 may be remote but accessible to the computing device 500.
  • the machine-readable storage medium 504 may store instructions 402, 404, 406, 408, 410, 412, 414 and 416.
  • instructions 402, 404, 406, 408, 410, 412, 414 and 416 may be executed by processor 502 to provide a mechanism for failure atomic update of application data in single application data file in a file system.
  • Instructions 402, 404, 406, 408, 410, 412, 414 and 416 may be executed by processor 502 to implement failure atomic updates of application data.
  • Instructions 402 , 404, 406, 408, 410, 412, 414 and 416 may be executed by processor 502 to protect integrity of application data from failures, such as process crashes, OS kernel panics, and/or power outages.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Retry When Errors Occur (AREA)

Abstract

In one example, techniques are disclosed for updating application data files between an open and a sync and/or between two consecutive syncs using a syncv operation, file descriptors associated with each modified application data file, and associated transaction records in a transaction log file.
PCT/US2015/027195 2015-01-30 2015-04-23 Failure atomic update of application data files WO2016122699A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN477/CHE/2015 2015-01-30
IN477CH2015 2015-01-30

Publications (1)

Publication Number Publication Date
WO2016122699A1 true WO2016122699A1 (fr) 2016-08-04

Family

ID=56544105

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/027195 WO2016122699A1 (fr) 2015-01-30 2015-04-23 Failure atomic update of application data files

Country Status (1)

Country Link
WO (1) WO2016122699A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108664597A (zh) * 2018-05-08 2018-10-16 深圳市创梦天地科技有限公司 Data caching apparatus and method on a mobile operating system, and storage medium
CN111552489A (zh) * 2020-03-31 2020-08-18 支付宝(杭州)信息技术有限公司 Hot upgrade method and apparatus for a user-mode file system, server, and medium
US10877992B2 (en) 2017-11-30 2020-12-29 International Business Machines Corporation Updating a database

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020078244A1 (en) * 2000-12-18 2002-06-20 Howard John H. Object-based storage device with improved reliability and fast crash recovery
US20100082547A1 (en) * 2008-09-22 2010-04-01 Riverbed Technology, Inc. Log Structured Content Addressable Deduplicating Storage
US20120330894A1 (en) * 2011-06-24 2012-12-27 Netapp, Inc. System and method for providing a unified storage system that supports file/object duality
WO2013112634A1 (fr) * 2012-01-23 2013-08-01 The Regents Of The University Of California System and method for implementing transactions using a storage device assist utility enabling atomic updates and a flexible interface for managing data journaling
WO2014062191A1 (fr) * 2012-10-19 2014-04-24 Hewlett-Packard Development Company, L.P. Asynchronous consistent snapshots in persistent memories


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10877992B2 (en) 2017-11-30 2020-12-29 International Business Machines Corporation Updating a database
CN108664597A (zh) * 2018-05-08 2018-10-16 深圳市创梦天地科技有限公司 Data caching apparatus and method on a mobile operating system, and storage medium
CN111552489A (zh) * 2020-03-31 2020-08-18 支付宝(杭州)信息技术有限公司 Hot upgrade method and apparatus for a user-mode file system, server, and medium
CN111552489B (zh) * 2020-03-31 2022-04-26 支付宝(杭州)信息技术有限公司 Hot upgrade method and apparatus for a user-mode file system, server, and medium

Similar Documents

Publication Publication Date Title
US10936441B2 (en) Write-ahead style logging in a persistent memory device
Min et al. Lightweight {Application-Level} Crash Consistency on Transactional Flash Storage
US9747287B1 (en) Method and system for managing metadata for a virtualization environment
US8046334B2 (en) Dual access to concurrent data in a database management system
JP6046260B2 (ja) Table format for MapReduce systems
US8768890B2 (en) Delaying database writes for database consistency
EP2590086B1 (fr) Columnar database using virtual file data objects
JP4583087B2 (ja) Copy-on-write database that preserves transaction consistency
US20220237143A1 (en) Single-Sided Distributed Storage System
US8626713B2 (en) Multiple contexts in a redirect on write file system
US9009125B2 (en) Creating and maintaining order of a log stream
US20180300083A1 (en) Write-ahead logging through a plurality of logging buffers using nvm
Hu et al. TxFS: Leveraging file-system crash consistency to provide ACID transactions
JP2011086241A (ja) Apparatus and method for generating a database replica
US11003555B2 (en) Tracking and recovering a disk allocation state
US8990159B2 (en) Systems and methods for durable database operations in a memory-mapped environment
US10127114B2 (en) Method of file system design and failure recovery with non-volatile memory
KR20160002109A (ko) Block-group-level journaling method and apparatus for an ordered-mode journaling file system
Son et al. SSD-assisted backup and recovery for database systems
US10185630B2 (en) Failure recovery in shared storage operations
US11068181B2 (en) Generating and storing monotonically-increasing generation identifiers
WO2016122699A1 (fr) Failure atomic update of application data files
WO2016120884A1 (fr) Failure atomic update of a single application data file
Pillai et al. Crash Consistency: Rethinking the Fundamental Abstractions of the File System
US10896168B2 (en) Application-defined object logging through a file system journal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15880588

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15880588

Country of ref document: EP

Kind code of ref document: A1