Summary:

  • DirectoryFileNotFound – File not found.

  • _Sentinel – Create a collection of name/value pairs.

  • FieldParser – Parses records read by the directory archive reader.

  • DirectoryArchiveReader – Reading part of DirectoryArchive.

  • DirectoryArchiveWriter – Writing part of DirectoryArchive.

  • DirectoryArchive – Offers the ability to read/write a directory and its entries to a folder.

  • DirectoryZipArchive – Offers the same interface as the DirectoryArchive, additionally zipping the folder on write and extracting the zip on read.
Module Contents

directory.archive.UnknownFieldType: typing_extensions.TypeAlias = 'Literal[_Sentinel.UNKNOWN_FIELD]'[source]
class directory.archive._Sentinel(*args, **kwds)[source]

Bases: enum.Enum

Create a collection of name/value pairs.

Example enumeration:

>>> class Color(Enum):
...     RED = 1
...     BLUE = 2
...     GREEN = 3

Access them by:

  • attribute access:

>>> Color.RED
<Color.RED: 1>
  • value lookup:

>>> Color(1)
<Color.RED: 1>
  • name lookup:

>>> Color['RED']
<Color.RED: 1>

Enumerations can be iterated over, and know how many members they have:

>>> len(Color)
3
>>> list(Color)
[<Color.RED: 1>, <Color.BLUE: 2>, <Color.GREEN: 3>]

Methods can be added to enumerations, and members can have their own attributes – see the documentation for details.

exception directory.archive.DirectoryFileNotFound(file_id: str, entry_name: str, filename: str)[source]

Bases: FileNotFoundError

File not found.

class directory.archive.FieldParser(directory: Directory, archive_path: pathlib.Path)[source]

Parses records read by the directory archive reader.

get_field(key: str) ParsedField | None[source]

CSV file header parsing is inconsistent with the internal id of the field: the headers are lowercased, so looking up the original key will not yield the field, and looking up the lowercased key will not succeed either, because characters like ( are not replaced by underscores.
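The mismatch described above can be demonstrated with a simplified stand-in for the label normalisation (`as_internal_id` here is an assumption for illustration, not the library's actual implementation):

```python
import re


def as_internal_id(label: str) -> str:
    # Simplified stand-in: lowercase, then replace each run of
    # non-alphanumeric characters with a single underscore.
    return re.sub(r'[^a-z0-9]+', '_', label.lower()).strip('_')


# Fields are stored under their internal id:
fields = {as_internal_id('Amount (CHF)'): 'number field'}

# A CSV header is merely lowercased, so neither the raw header nor the
# lowercased header matches the internal id directly:
header = 'Amount (CHF)'.lower()          # 'amount (chf)'
print(header in fields)                  # False
print(as_internal_id(header) in fields)  # True: normalising recovers it
```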

parse_fileinput(key: str, value: str, field: onegov.form.parser.core.FileinputField) onegov.core.utils.Bunch | None[source]
parse_multiplefileinput(key: str, value: str, field: onegov.form.parser.core.MultipleFileinputField) tuple[onegov.core.utils.Bunch, ...][source]
parse_generic(key: str, value: str, field: onegov.form.parser.core.ParsedField) object[source]
parse_item(key: str, value: str) tuple[str, Any | None] | UnknownFieldType[source]
parse(record: SupportsItems[str, str]) dict[str, Any | None][source]
class directory.archive.DirectoryArchiveReader[source]

Reading part of DirectoryArchive.

path: pathlib.Path[source]
read(target: Directory | None = None, skip_existing: bool = True, limit: int = 0, apply_metadata: bool = True, after_import: Callable[[DirectoryEntry], Any] | None = None)[source]

Reads the archive resulting in a dictionary and entries.

  • target – Uses the given directory as a target for the read. Otherwise, a new directory is created in memory (default).

  • skip_existing – Excludes already existing entries from being added to the directory. Only applies if target is not None.

  • limit – Limits the number of records which are imported. If the limit is reached, the read process silently ignores all extra items.

  • apply_metadata – True if the metadata found in the archive should be applied to the directory.

  • after_import – Called with the newly added entry, right after it has been added.

apply_metadata(directory: Directory, metadata: dict[str, Any])[source]

Applies the metadata to the given directory and returns it.

read_metadata() dict[str, Any][source]

Returns the metadata as a dictionary.

read_data() Sequence[dict[str, Any]][source]

Returns the entries as a sequence of dictionaries.

read_data_from_json() list[dict[str, Any]][source]
read_data_from_csv() tuple[dict[str, Any], ...][source]
read_data_from_xlsx() tuple[dict[str, Any], ...][source]
class directory.archive.DirectoryArchiveWriter[source]

Writing part of DirectoryArchive.

path: pathlib.Path[source]
format: Literal['json', 'csv', 'xlsx'][source]
transform: FieldValueTransform[source]
write(directory: Directory, *args: Any, entry_filter: DirectoryEntryFilter | None = None, query: Query[DirectoryEntry] | None = None, **kwargs: Any) None[source]

Writes the given directory.

write_directory_metadata(directory: Directory) None[source]

Writes the metadata.

write_directory_entries(directory: Directory, entry_filter: DirectoryEntryFilter | None = None, query: Query[DirectoryEntry] | None = None) None[source]

Writes the directory entries. Allows filtering with a custom entry_filter function, as well as passing a query object.

write_paths(session: sqlalchemy.orm.Session, paths: dict[str, str], fid_to_entry: dict[str, str] | None = None) None[source]

Writes the given files to the archive path.

  • session – The database session in use.

  • paths – A dictionary with each key being a file id and each value being a path where this file id should be written to.

  • fid_to_entry – A dictionary mapping each file id to its entry name.

write_json(path: pathlib.Path, data: onegov.core.types.JSON_ro) None[source]
write_xlsx(path: pathlib.Path, data: Iterable[dict[str, Any]]) None[source]
write_csv(path: pathlib.Path, data: Iterable[dict[str, Any]]) None[source]
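These writers serialize the entry dictionaries to the respective format. A minimal standard-library sketch of the JSON and CSV variants (an illustration of the idea, not the library's actual code):

```python
import csv
import json
import tempfile
from pathlib import Path
from typing import Any, Iterable


def write_json(path: Path, data: Any) -> None:
    # Dump the data as UTF-8 JSON at the given path.
    with open(path, 'w', encoding='utf-8') as f:
        json.dump(data, f, ensure_ascii=False, indent=2)


def write_csv(path: Path, data: Iterable[dict]) -> None:
    # Use the keys of the first row as the CSV header.
    rows = list(data)
    with open(path, 'w', encoding='utf-8', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]) if rows else [])
        writer.writeheader()
        writer.writerows(rows)


folder = Path(tempfile.mkdtemp())
write_json(folder / 'data.json', [{'name': 'Chess Club'}])
write_csv(folder / 'data.csv', [{'name': 'Chess Club'}])
```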
class directory.archive.DirectoryArchive(path: _typeshed.StrPath, format: Literal['json', 'csv', 'xlsx'] = 'json', transform: FieldValueTransform | None = None)[source]

Bases: DirectoryArchiveReader, DirectoryArchiveWriter

Offers the ability to read/write a directory and its entries to a folder.


archive = DirectoryArchive('/tmp/directory')

The archive content is as follows:

  • metadata.json (contains the directory data)

  • data.json/data.csv/data.xlsx (contains the directory entries)

  • ./<field_id>/<entry_id>.<ext> (files referenced by the directory entries)

The directory entries are stored as json, csv or xlsx. Json is preferred.
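The layout above can be reproduced with a few lines of standard-library code. This is an illustration of the documented structure (with hypothetical field and entry names), not the archive implementation itself:

```python
import json
import tempfile
from pathlib import Path

folder = Path(tempfile.mkdtemp())

# metadata.json holds the directory data ...
(folder / 'metadata.json').write_text(json.dumps({'title': 'Clubs'}))

# ... data.json holds the directory entries ...
(folder / 'data.json').write_text(json.dumps([{'name': 'Chess Club'}]))

# ... and files referenced by entries live at <field_id>/<entry_id>.<ext>
(folder / 'logo').mkdir()
(folder / 'logo' / 'chess-club.png').write_bytes(b'\x89PNG')

contents = sorted(p.relative_to(folder).as_posix() for p in folder.rglob('*'))
# contents == ['data.json', 'logo', 'logo/chess-club.png', 'metadata.json']
```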

class directory.archive.DirectoryZipArchive(path: _typeshed.StrPath, *args: Any, **kwargs: Any)[source]

Offers the same interface as the DirectoryArchive, additionally zipping the folder on write and extracting the zip on read.

format: Literal['zip'] = 'zip'[source]
classmethod from_buffer(buffer: SupportsReadAndSeek) typing_extensions.Self[source]

Creates a zip archive instance from a file object in memory.

write(directory: Directory, *args: Any, **kwargs: Any) None[source]
read(*args: Any, **kwargs: Any)[source]
compress() None[source]
extract() None[source]
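from_buffer() accepts an in-memory file object rather than a path. The underlying mechanics can be sketched with the standard library (this mirrors the idea of reading a zip archive from a buffer, not the actual implementation):

```python
import io
import zipfile

# Build a zip archive entirely in memory, as it might arrive in an upload.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, 'w') as zf:
    zf.writestr('metadata.json', '{"title": "Clubs"}')
    zf.writestr('data.json', '[]')

# A reader only needs a file object that supports read() and seek():
buffer.seek(0)
with zipfile.ZipFile(buffer) as zf:
    names = zf.namelist()
# names == ['metadata.json', 'data.json']
```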