Compare commits

...

55 Commits

Author SHA1 Message Date
Trenton H
2098a11eb1 Fix: text parser get_parser forwards logging_group, drops progress_callback
TextDocumentParser.__init__ accepts logging_group: object = None, same
as RemoteDocumentParser. The old shim incorrectly dropped it; fix to
forward it as a positional arg and only drop progress_callback.
Add type annotations and from __future__ import annotations for
consistency with the remote parser signals shim.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-13 12:36:24 -07:00
Trenton H
af8a8e791b Fix: get_parser factory forwards logging_group, drops progress_callback
consumer.py calls parser_class(logging_group, progress_callback=...).
RemoteDocumentParser.__init__ accepts logging_group but not
progress_callback, so only the latter is dropped — matching the pattern
established by the TextDocumentParser signals shim.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-13 12:35:16 -07:00
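These two shim commits describe the same adapter pattern. A minimal sketch, assuming the import location introduced by the git mv commit further down; everything else follows the quoted signatures:

```python
from __future__ import annotations

from typing import Any

from paperless.parsers.remote import RemoteDocumentParser  # location per the move commit below


def get_parser(logging_group: object = None, **kwargs: Any) -> RemoteDocumentParser:
    # consumer.py calls parser_class(logging_group, progress_callback=...);
    # __init__ accepts logging_group but not progress_callback, so forward
    # the former positionally and drop only the latter.
    kwargs.pop("progress_callback", None)
    return RemoteDocumentParser(logging_group, **kwargs)
```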
Trenton H
8d4163bef3 Refactor: fix type errors in remote parser and signals
- remote.py: add `if TYPE_CHECKING: assert` guards before the Azure
  client construction to narrow config.endpoint and config.api_key from
  str|None to str. The narrowing is safe: engine_is_valid() guarantees
  both are non-None when it returns True (api_key explicitly; endpoint
  via `not (engine=="azureai" and endpoint is None)` for the only valid
  engine). Asserts are wrapped in TYPE_CHECKING so they carry zero
  runtime cost.

- signals.py: add full type annotations — return types, Any-typed
  sender parameter, and explicit logging_group argument replacing *args.
  Add `from __future__ import annotations` for consistent annotation style.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-13 12:31:17 -07:00
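A minimal sketch of the narrowing trick, with an illustrative dataclass standing in for the real RemoteEngineConfig:

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import TYPE_CHECKING


@dataclass
class RemoteEngineConfig:
    engine: str | None = None
    endpoint: str | None = None
    api_key: str | None = None

    def engine_is_valid(self) -> bool:
        # azureai is the only valid engine, and it requires both values.
        return (
            self.engine == "azureai"
            and self.api_key is not None
            and not (self.engine == "azureai" and self.endpoint is None)
        )


def build_endpoint_url(config: RemoteEngineConfig) -> str:
    if not config.engine_is_valid():
        raise ValueError("remote engine is not configured")
    if TYPE_CHECKING:
        # Never executed (TYPE_CHECKING is False at runtime), but the type
        # checker analyses the branch and narrows str | None to str;
        # engine_is_valid() guarantees both are non-None here.
        assert config.endpoint is not None
        assert config.api_key is not None
    return f"{config.endpoint}/analyze"  # endpoint is plain str for the checker
```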
Trenton H
e9e1d4ccca Refactor: wire RemoteDocumentParser into consumer and fix signals
- paperless_remote/signals.py: import from paperless.parsers.remote
  (new location after git mv). supported_mime_types() is now a
  classmethod that always returns the full set, so get_supported_mime_types()
  in the signal layer explicitly checks RemoteEngineConfig validity and
  returns {} when unconfigured — preserving the old behaviour where an
  unconfigured remote parser does not register for any MIME types.

- documents/consumer.py: extend the _parser_cleanup() shim, parse()
  dispatch, and get_thumbnail() dispatch to include RemoteDocumentParser
  alongside TextDocumentParser. Both new-style parsers use __exit__
  for cleanup and take (document_path, mime_type) without a file_name
  argument.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-13 12:09:33 -07:00
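A sketch of both halves, with assumed import paths and a simplified dispatch (the real consumer code has more branches):

```python
from paperless.config import RemoteEngineConfig            # assumed location
from paperless.parsers.remote import RemoteDocumentParser  # new location after git mv
from paperless.parsers.text import TextDocumentParser      # assumed location


def get_supported_mime_types():
    # supported_mime_types() now always reports the full set, so the old
    # "invisible when unconfigured" behaviour moves to the signal layer:
    # no valid remote engine config, no registered MIME types.
    if not RemoteEngineConfig().engine_is_valid():
        return {}
    return RemoteDocumentParser.supported_mime_types()


# consumer.py dispatch (sketch): new-style parsers take
# (document_path, mime_type) with no file_name and clean up via __exit__.
NEW_STYLE_PARSERS = (TextDocumentParser, RemoteDocumentParser)


def run_parser(parser, document_path, mime_type, file_name):
    if isinstance(parser, NEW_STYLE_PARSERS):
        return parser.parse(document_path, mime_type)
    return parser.parse(document_path, mime_type, file_name)
```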
Trenton H
c955ba7d07 Refactor: improve remote parser test fixture structure
- make_azure_mock moved from conftest.py back into test_remote_parser.py;
  it is specific to that module and does not belong in shared fixtures
- azure_client fixture composes azure_settings + make_azure_mock + patch
  in one step; tests no longer repeat the mocker.patch call or carry an
  unused azure_settings parameter
- failing_azure_client fixture similarly composes azure_settings + patch
  with a RuntimeError side effect; TestRemoteParserParseError now only
  receives the mock it actually uses
- All @pytest.mark.parametrize calls use pytest.param with explicit ids
  (pdf, png, jpeg, ...) for readable test output

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-13 12:00:37 -07:00
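A sketch of the composed fixtures; the patch target, mock internals, and the remote_parser fixture (from the conftest described below) are illustrative:

```python
from unittest.mock import MagicMock

import pytest


def make_azure_mock(text: str = "sample text") -> MagicMock:
    # Module-level helper, moved back here from conftest.py by this commit.
    client = MagicMock()
    client.begin_analyze_document.return_value.result.return_value.content = text
    return client


@pytest.fixture
def azure_client(azure_settings, mocker):
    # Settings, mock construction, and patch composed in one step, so tests
    # receive the ready-to-assert mock and never repeat the mocker.patch call.
    client = make_azure_mock()
    mocker.patch("paperless.parsers.remote.DocumentAnalysisClient", return_value=client)
    return client


@pytest.fixture
def failing_azure_client(azure_settings, mocker):
    # Same composition with a RuntimeError side effect for the error tests.
    return mocker.patch(
        "paperless.parsers.remote.DocumentAnalysisClient",
        side_effect=RuntimeError("remote OCR unavailable"),
    )


@pytest.mark.parametrize(
    "mime_type",
    [
        pytest.param("application/pdf", id="pdf"),
        pytest.param("image/png", id="png"),
        pytest.param("image/jpeg", id="jpeg"),
    ],
)
def test_supports(remote_parser, mime_type):
    assert mime_type in remote_parser.supported_mime_types()
```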
Trenton H
7028bb2163 Refactor: use fixture factory and usefixtures in remote parser tests
- `_make_azure_mock` helper promoted to `make_azure_mock` factory fixture
  in conftest.py; tests call `make_azure_mock()` or
  `make_azure_mock("custom text")` instead of a module-level function
- `azure_settings` and `no_engine_settings` applied via
  `@pytest.mark.usefixtures` wherever their value is not referenced
  inside the test body; `TestRemoteParserParseError` marked at the class
  level since all three tests need the same setting

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-13 11:56:38 -07:00
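A sketch of the factory fixture and the class-level usefixtures application; the mock wiring and fixture names beyond those quoted are assumptions:

```python
from unittest.mock import MagicMock

import pytest


@pytest.fixture
def make_azure_mock():
    # Factory fixture: tests call make_azure_mock() or
    # make_azure_mock("custom text") instead of a module-level helper.
    def _make(text: str = "This is a test document.") -> MagicMock:
        client = MagicMock()
        client.begin_analyze_document.return_value.result.return_value.content = text
        return client

    return _make


@pytest.mark.usefixtures("azure_settings")
class TestRemoteParserParseError:
    # Class-level marker: all three tests need the setting applied but never
    # reference its value, so it stays out of their signatures.
    def test_parse_raises(self, remote_parser, failing_azure_client, sample_pdf):
        with pytest.raises(RuntimeError):
            remote_parser.parse(sample_pdf, "application/pdf")
```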
Trenton H
5d4d87764c Feature: migrate RemoteDocumentParser to ParserProtocol interface
Rewrites the remote OCR parser to the new plugin system contract:

- `supported_mime_types()` is now a classmethod that always returns the
  full set of 7 MIME types; the old instance-method hack (returning {}
  when unconfigured) is removed
- `score()` classmethod returns None when no remote engine is configured
  (making the parser invisible to the registry), and 20 when active —
  higher than the tesseract default of 10 so the remote engine takes
  priority when both are available
- No longer inherits from RasterisedDocumentParser; inherits no parser
  class at all — just implements the protocol directly
- `can_produce_archive = True`; `requires_pdf_rendition = False`
- `_azure_ai_vision_parse()` takes explicit config arg; API client
  created and closed within the method
- `get_page_count()` returns the PDF page count for application/pdf,
  delegating to the new `get_page_count_for_pdf()` utility
- `extract_metadata()` delegates to `extract_pdf_metadata()` for PDFs;
  returns [] for all other MIME types

New files:
- `src/paperless/parsers/utils.py` — shared `extract_pdf_metadata()` and
  `get_page_count_for_pdf()` utilities (pikepdf-based); both the remote
  and tesseract parsers will use these going forward
- `src/paperless/tests/parsers/test_remote_parser.py` — 42 pytest-style
  tests using pytest-django `settings` and pytest-mock `mocker` fixtures
- `src/paperless/tests/parsers/conftest.py` — remote parser instance,
  sample-file, and settings-helper fixtures

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-13 11:52:11 -07:00
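A sketch of the resulting class shape. The class attributes, score values, and utility names come from the message; the MIME constants, the score() signature, and the config access are assumptions:

```python
from __future__ import annotations

from pathlib import Path

from paperless.config import RemoteEngineConfig  # assumed location
from paperless.parsers.utils import extract_pdf_metadata, get_page_count_for_pdf


class RemoteDocumentParser:
    # Implements ParserProtocol directly; no parser base class.
    can_produce_archive = True
    requires_pdf_rendition = False

    @classmethod
    def supported_mime_types(cls) -> set[str]:
        # Always the full set of 7; configuration gating moved to score().
        return {
            "application/pdf",
            "image/png",
            "image/jpeg",
            "image/tiff",
            "image/bmp",
            "image/gif",
            "image/webp",
        }

    @classmethod
    def score(cls, mime_type: str) -> int | None:
        if not RemoteEngineConfig().engine_is_valid():
            return None  # invisible to the registry when unconfigured
        return 20  # outranks the tesseract default of 10

    def get_page_count(self, document_path: Path, mime_type: str) -> int | None:
        if mime_type == "application/pdf":
            return get_page_count_for_pdf(document_path)
        return None

    def extract_metadata(self, document_path: Path, mime_type: str) -> list:
        if mime_type == "application/pdf":
            return extract_pdf_metadata(document_path)
        return []
```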
Trenton H
75dce7f19f Refactor: move remote parser, test, and sample to paperless.parsers
Relocates three files to their new homes in the parser plugin system:

- src/paperless_remote/parsers.py
    → src/paperless/parsers/remote.py
- src/paperless_remote/tests/test_parser.py
    → src/paperless/tests/parsers/test_remote_parser.py
- src/paperless_remote/tests/samples/simple-digital.pdf
    → src/paperless/tests/samples/remote/simple-digital.pdf

Content and imports will be updated in the follow-up commit that
rewrites the parser to the new ParserProtocol interface.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-13 11:32:34 -07:00
dependabot[bot]
365ff99934 Bump ocrmypdf from 16.13.0 to 17.3.0 in the document-processing group (#12267)
* Bump ocrmypdf from 16.13.0 to 17.3.0 in the document-processing group

Bumps the document-processing group with 1 update: [ocrmypdf](https://github.com/ocrmypdf/OCRmyPDF).


Updates `ocrmypdf` from 16.13.0 to 17.3.0
- [Release notes](https://github.com/ocrmypdf/OCRmyPDF/releases)
- [Commits](https://github.com/ocrmypdf/OCRmyPDF/compare/v16.13.0...v17.3.0)

---
updated-dependencies:
- dependency-name: ocrmypdf
  dependency-version: 17.3.0
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: document-processing
...

Signed-off-by: dependabot[bot] <support@github.com>

* Updates the argument name for v17

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Trenton H <797416+stumpylog@users.noreply.github.com>
2026-03-13 09:51:21 -07:00
Trenton H
d86cfdb088 Feature: Initial document parser plugin framework (#12294) 2026-03-12 21:53:17 +00:00
dependabot[bot]
c2e1085418 Chore(deps): Bump tornado from 6.5.4 to 6.5.5 (#12327)
Bumps [tornado](https://github.com/tornadoweb/tornado) from 6.5.4 to 6.5.5.
- [Changelog](https://github.com/tornadoweb/tornado/blob/master/docs/releases.rst)
- [Commits](https://github.com/tornadoweb/tornado/compare/v6.5.4...v6.5.5)

---
updated-dependencies:
- dependency-name: tornado
  dependency-version: 6.5.5
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-12 13:44:41 -07:00
Trenton H
ee0d1a3094 Enhancement: Make the StatusConsumer truly async (#12298) 2026-03-12 13:27:35 -07:00
Trenton H
f15394fa5c Fix: Removes the double exec that prevented migrations from running (#12317) 2026-03-12 12:46:12 -07:00
dependabot[bot]
773eb25f7d Chore(deps): Bump the utilities-minor group across 1 directory with 5 updates (#12324)
* Chore(deps): Bump the utilities-minor group across 1 directory with 5 updates

Bumps the utilities-minor group with 5 updates in the / directory:

| Package | From | To |
| --- | --- | --- |
| [drf-spectacular-sidecar](https://github.com/tfranzel/drf-spectacular-sidecar) | `2026.1.1` | `2026.3.1` |
| [filelock](https://github.com/tox-dev/py-filelock) | `3.20.3` | `3.25.0` |
| [scikit-learn](https://github.com/scikit-learn/scikit-learn) | `1.7.2` | `1.8.0` |
| [faker](https://github.com/joke2k/faker) | `40.5.1` | `40.8.0` |
| [pyrefly](https://github.com/facebook/pyrefly) | `0.54.0` | `0.55.0` |



Updates `drf-spectacular-sidecar` from 2026.1.1 to 2026.3.1
- [Commits](https://github.com/tfranzel/drf-spectacular-sidecar/compare/2026.1.1...2026.3.1)

Updates `filelock` from 3.20.3 to 3.25.0
- [Release notes](https://github.com/tox-dev/py-filelock/releases)
- [Changelog](https://github.com/tox-dev/filelock/blob/main/docs/changelog.rst)
- [Commits](https://github.com/tox-dev/py-filelock/compare/3.20.3...3.25.0)

Updates `scikit-learn` from 1.7.2 to 1.8.0
- [Release notes](https://github.com/scikit-learn/scikit-learn/releases)
- [Commits](https://github.com/scikit-learn/scikit-learn/compare/1.7.2...1.8.0)

Updates `faker` from 40.5.1 to 40.8.0
- [Release notes](https://github.com/joke2k/faker/releases)
- [Changelog](https://github.com/joke2k/faker/blob/master/CHANGELOG.md)
- [Commits](https://github.com/joke2k/faker/compare/v40.5.1...v40.8.0)

Updates `pyrefly` from 0.54.0 to 0.55.0
- [Release notes](https://github.com/facebook/pyrefly/releases)
- [Commits](https://github.com/facebook/pyrefly/compare/0.54.0...0.55.0)

---
updated-dependencies:
- dependency-name: drf-spectacular-sidecar
  dependency-version: 2026.3.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: utilities-minor
- dependency-name: filelock
  dependency-version: 3.25.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: utilities-minor
- dependency-name: scikit-learn
  dependency-version: 1.8.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: utilities-minor
- dependency-name: faker
  dependency-version: 40.8.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: utilities-minor
- dependency-name: pyrefly
  dependency-version: 0.55.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: utilities-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Don't know what your problem is, dependabot

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: shamoon <4887959+shamoon@users.noreply.github.com>
2026-03-12 12:30:42 -07:00
dependabot[bot]
e2947ccff2 Chore(deps): Bump the pre-commit-dependencies group with 4 updates (#12323)
* Chore(deps): Bump the pre-commit-dependencies group with 4 updates

---
updated-dependencies:
- dependency-name: https://github.com/codespell-project/codespell
  dependency-version: 2.4.2
  dependency-type: direct:production
  dependency-group: pre-commit-dependencies
- dependency-name: prettier
  dependency-version: 3.8.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: pre-commit-dependencies
- dependency-name: prettier-plugin-organize-imports
  dependency-version: 4.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: pre-commit-dependencies
- dependency-name: https://github.com/lovesegfault/beautysh
  dependency-version: 6.4.3
  dependency-type: direct:production
  dependency-group: pre-commit-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>

* Drop this, it seems more trouble than it's worth

* Re-run prek with new prettier

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: shamoon <4887959+shamoon@users.noreply.github.com>
2026-03-12 16:29:57 +00:00
dependabot[bot]
61841a767b Chore(deps): Bump the actions group with 3 updates (#12322)
Bumps the actions group with 3 updates: [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action), [docker/login-action](https://github.com/docker/login-action) and [actions/setup-node](https://github.com/actions/setup-node).


Updates `docker/setup-buildx-action` from 3.12.0 to 4.0.0
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](https://github.com/docker/setup-buildx-action/compare/v3.12.0...v4.0.0)

Updates `docker/login-action` from 3.7.0 to 4.0.0
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/v3.7.0...v4.0.0)

Updates `actions/setup-node` from 6.2.0 to 6.3.0
- [Release notes](https://github.com/actions/setup-node/releases)
- [Commits](https://github.com/actions/setup-node/compare/v6.2.0...v6.3.0)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: actions
- dependency-name: docker/login-action
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: actions
- dependency-name: actions/setup-node
  dependency-version: 6.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Trenton H <797416+stumpylog@users.noreply.github.com>
2026-03-12 09:04:22 -07:00
GitHub Actions
15db023caa Auto translate strings 2026-03-12 15:44:21 +00:00
shamoon
45b363659e Chore: mark document detail email action as deprecated (#12308) 2026-03-12 15:42:14 +00:00
Trenton H
7494161c95 Add dependency groups for pre-commit dependencies 2026-03-12 08:04:21 -07:00
Trenton H
5331312699 Remove cooldown for pre-commit updates (it's not supported)
Removed the default cooldown period for pre-commit updates.
2026-03-12 07:59:27 -07:00
Trenton H
b5a002b8ed Chore: Enable dependabot for pre-commit (#12305) 2026-03-12 07:52:43 -07:00
shamoon
dd8573242d Update api version for frontend dev server 2026-03-12 01:24:38 -07:00
Trenton H
86fa74c115 Fix: Postgres selection, DBENGINE and migrations (#12299) 2026-03-11 11:54:24 -07:00
shamoon
b7b9e83f37 Fix (dev): include DatePipe in BulkEditor unit test 2026-03-11 00:01:06 -07:00
GitHub Actions
217b5df591 Auto translate strings 2026-03-10 23:47:25 +00:00
shamoon
3efc9a5733 Fix: use effective content for matching and suggestion content (#12293) 2026-03-10 23:45:56 +00:00
shamoon
e19f341974 Fix: Pin filelock to ~=3.20.3 (#12297) 2026-03-10 13:38:23 -07:00
GitHub Actions
2b4ea570ef Auto translate strings 2026-03-10 18:58:20 +00:00
shamoon
86573fc1a0 Chore: separate actions from bulk edit endpoint (#12286) 2026-03-10 18:55:36 +00:00
dependabot[bot]
3856ec19c0 docker(deps): bump astral-sh/uv (#12265)
Bumps [astral-sh/uv](https://github.com/astral-sh/uv) from 0.10.7-python3.12-trixie-slim to 0.10.8-python3.12-trixie-slim.
- [Release notes](https://github.com/astral-sh/uv/releases)
- [Changelog](https://github.com/astral-sh/uv/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/uv/compare/0.10.7...0.10.8)

---
updated-dependencies:
- dependency-name: astral-sh/uv
  dependency-version: 0.10.8-python3.12-trixie-slim
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-10 17:27:06 +00:00
GitHub Actions
1221e7f21c Auto translate strings 2026-03-09 22:37:56 +00:00
shamoon
3e32e90355 Breaking: drop support for api versions < 9 (#12284) 2026-03-09 22:36:22 +00:00
Trenton H
63cb75564e Chore: Remove some further old items (encryption passphrase and PNG handling) (#12290) 2026-03-09 22:04:51 +00:00
dependabot[bot]
6955d6c07f Chore(deps): Bump the utilities-patch group across 1 directory with 6 updates (#12291)
* Chore(deps): Bump the utilities-patch group across 1 directory with 6 updates

Bumps the utilities-patch group with 6 updates in the / directory:

| Package | From | To |
| --- | --- | --- |
| llama-index-embeddings-openai | `0.5.1` | `0.5.2` |
| llama-index-llms-openai | `0.6.21` | `0.6.26` |
| [python-dotenv](https://github.com/theskumar/python-dotenv) | `1.2.1` | `1.2.2` |
| [regex](https://github.com/mrabarnett/mrab-regex) | `2026.2.19` | `2026.2.28` |
| [prek](https://github.com/j178/prek) | `0.3.3` | `0.3.5` |
| [ruff](https://github.com/astral-sh/ruff) | `0.15.4` | `0.15.5` |



Updates `llama-index-embeddings-openai` from 0.5.1 to 0.5.2

Updates `llama-index-llms-openai` from 0.6.21 to 0.6.26

Updates `python-dotenv` from 1.2.1 to 1.2.2
- [Release notes](https://github.com/theskumar/python-dotenv/releases)
- [Changelog](https://github.com/theskumar/python-dotenv/blob/main/CHANGELOG.md)
- [Commits](https://github.com/theskumar/python-dotenv/compare/v1.2.1...v1.2.2)

Updates `regex` from 2026.2.19 to 2026.2.28
- [Changelog](https://github.com/mrabarnett/mrab-regex/blob/hg/changelog.txt)
- [Commits](https://github.com/mrabarnett/mrab-regex/compare/2026.2.19...2026.2.28)

Updates `prek` from 0.3.3 to 0.3.5
- [Release notes](https://github.com/j178/prek/releases)
- [Changelog](https://github.com/j178/prek/blob/master/CHANGELOG.md)
- [Commits](https://github.com/j178/prek/compare/v0.3.3...v0.3.5)

Updates `ruff` from 0.15.4 to 0.15.5
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.15.4...0.15.5)

---
updated-dependencies:
- dependency-name: llama-index-embeddings-openai
  dependency-version: 0.5.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: utilities-patch
- dependency-name: llama-index-llms-openai
  dependency-version: 0.6.26
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: utilities-patch
- dependency-name: python-dotenv
  dependency-version: 1.2.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: utilities-patch
- dependency-name: regex
  dependency-version: 2026.2.28
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: utilities-patch
- dependency-name: prek
  dependency-version: 0.3.5
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: utilities-patch
- dependency-name: ruff
  dependency-version: 0.15.5
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: utilities-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update .pre-commit-config.yaml

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: shamoon <4887959+shamoon@users.noreply.github.com>
2026-03-09 19:47:19 +00:00
shamoon
d85ee29976 Fix ci gate base 2026-03-09 11:16:46 -07:00
GitHub Actions
0c7d56c5e7 Auto translate strings 2026-03-09 17:45:53 +00:00
Trenton H
0bcf904e3a Chore: Finish settings refactor (#12263) 2026-03-09 17:43:51 +00:00
Trenton H
bcc2f11152 Performance: Stream JSON during import for memory improvements (#12276)
* Perf: stream manifest parsing with ijson in document_importer

Replace bulk json.load of the full manifest (which materializes the
entire JSON array into memory) with incremental ijson streaming.
Eliminates self.manifest entirely — records are never all in memory
at once.

- Add ijson>=3.2 dependency
- New module-level iter_manifest_records() generator
- load_manifest_files() collects paths only; no parsing at load time
- check_manifest_validity() streams without accumulating records
- decrypt_secret_fields() streams each manifest to a .decrypted.json
  temp file record-by-record; temp files cleaned up after file copy
- _import_files_from_manifest() collects only document records (small
  fraction of manifest) for the tqdm progress bar

Measured on 200 docs + 200 CustomFieldInstances:
- Streaming validation: peak memory 3081 KiB -> 333 KiB (89% reduction)
- Stream-decrypt to file: peak memory 3081 KiB -> 549 KiB (82% reduction)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Perf: slim dict in _import_files_from_manifest, discard fields

When collecting document records for the file-copy step, extract only
the 4 keys the loop actually uses (pk + 3 exported filename keys) and
discard the full fields dict (content, checksum, tags, etc.).

Peak memory for the document-record list: 939 KiB -> 375 KiB (60% reduction).
Wall time unchanged.
2026-03-09 10:20:48 -07:00
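A minimal sketch of the streaming pattern, assuming the manifest is a single top-level JSON array; ijson.items() with the "item" prefix yields array elements one at a time:

```python
from __future__ import annotations

from pathlib import Path
from typing import Any, Iterator

import ijson


def iter_manifest_records(path: Path) -> Iterator[dict[str, Any]]:
    # Stream records one by one; the array is never materialized in memory.
    with path.open("rb") as f:
        yield from ijson.items(f, "item")


def check_manifest_validity(path: Path) -> None:
    # Validation runs in near-constant memory, one record at a time.
    for record in iter_manifest_records(path):
        if "model" not in record:
            raise ValueError(f"Invalid manifest record in {path}")
```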
shamoon
e18b1fd99d Chore: use unified "gates" for ci tests and docs checks (#12277) 2026-03-09 17:02:34 +00:00
Trenton H
e30676f889 Feature: Migrate import/export to rich progress (#12260)
* Refactor: migrate exporter/importer from tqdm to PaperlessCommand.track()

Replace direct tqdm usage in document_exporter and document_importer with
the PaperlessCommand base class and its track() method, which is backed by
Rich and handles --no-progress-bar automatically. Also removes the unused
ProgressBarMixin from mixins.py.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Refactor: add explicit supports_progress_bar and supports_multiprocessing to all PaperlessCommand subclasses

Each management command now explicitly declares both class attributes
rather than relying on defaults, making intent unambiguous at a glance.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-09 08:59:17 -07:00
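A sketch of the base-class shape this describes; attribute and flag names beyond those quoted are assumptions:

```python
from collections.abc import Iterable, Iterator

from django.core.management.base import BaseCommand
from rich.progress import track as rich_track


class PaperlessCommand(BaseCommand):
    # Subclasses now declare both explicitly rather than relying on defaults.
    supports_progress_bar = True
    supports_multiprocessing = False

    no_progress_bar = False  # set from the --no-progress-bar flag

    def track(self, iterable: Iterable, description: str = "") -> Iterator:
        # Rich-backed progress that honours --no-progress-bar automatically.
        if self.no_progress_bar:
            yield from iterable
        else:
            yield from rich_track(iterable, description=description)
```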
Martin Kleine
2a28549c5a Documentation: Update development commands and pnpm for Angular build commands (#12283)
---------

Co-authored-by: shamoon <4887959+shamoon@users.noreply.github.com>
2026-03-09 07:06:16 -07:00
GitHub Actions
4badf0e7c2 Auto translate strings 2026-03-09 01:52:08 +00:00
Paul Gessinger
bc26d94593 Chore: Add saved view compatibility in API version 9 (#12280)
---------

Co-authored-by: shamoon <4887959+shamoon@users.noreply.github.com>
2026-03-08 18:50:31 -07:00
shamoon
93cbbf34b7 Merge branch 'main' into dev 2026-03-07 23:30:08 -08:00
shamoon
1e8622494d Documentation: remove broken link 2026-03-07 23:29:42 -08:00
GitHub Actions
0c3298f030 Auto translate strings 2026-03-08 03:06:59 +00:00
Sven-Hendrik Haase
2b288c094d Enhancement: Show correspondent in document merge dialog (#12271)
---------

Co-authored-by: shamoon <4887959+shamoon@users.noreply.github.com>
2026-03-07 19:05:28 -08:00
Trenton H
2cdb1424ef Performance: Further export memory improvements (#12273)
* Perf: streaming manifest writer for document exporter (Phase 3)

Replaces the in-memory manifest dict accumulation with a
StreamingManifestWriter that writes records to manifest.json
incrementally, keeping only one batch resident in memory at a time.

Key changes:
- Add StreamingManifestWriter: writes to .tmp atomically, BLAKE2b
  compare for --compare-json, discard() on exception
- Add _encrypt_record_inline(): per-record encryption replacing the
  bulk encrypt_secret_fields() call; crypto setup moved before streaming
- Add _write_split_manifest(): extracted per-document manifest writing
- Refactor dump(): non-doc records streamed during transaction, documents
  accumulated then written after filenames are assigned
- Upgrade check_and_write_json() from MD5 to BLAKE2b
- Remove encrypt_secret_fields() and unused itertools.chain import
- Add profiling marker to pyproject.toml

Measured improvement (200 docs + 200 CustomFieldInstances, same
dump() code path, only writer differs):
- Peak memory: ~50% reduction
- Memory delta: ~70% reduction
- Wall time and query count: unchanged

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Refactor: O(1) lookup table for CRYPT_FIELDS in per-record encryption

Add CRYPT_FIELDS_BY_MODEL to CryptMixin, derived from CRYPT_FIELDS at
class definition time. _encrypt_record_inline() now does a single dict
lookup instead of a linear scan per record, eliminating the loop and
break pattern.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-07 14:24:50 -08:00
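A sketch of the lookup-table derivation; the CRYPT_FIELDS entries shown are illustrative:

```python
class CryptMixin:
    # Illustrative entries; the real list enumerates every encrypted field.
    CRYPT_FIELDS = [
        {"model_name": "paperless_mail.mailaccount", "fields": ["password", "refresh_token"]},
    ]
    # Derived once at class definition time, so _encrypt_record_inline() does
    # a single dict lookup per record instead of a linear scan with break.
    CRYPT_FIELDS_BY_MODEL = {
        entry["model_name"]: entry["fields"] for entry in CRYPT_FIELDS
    }


def fields_to_encrypt(record: dict) -> list[str]:
    return CryptMixin.CRYPT_FIELDS_BY_MODEL.get(record.get("model", ""), [])
```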
Trenton H
f5c0c21922 Chore: Lazy imports of the heavy AI modules (#12275) 2026-03-07 12:53:22 -08:00
Trenton H
91ddda9256 Fix: Uploaded digest artifact name for Docker build (#12272) 2026-03-06 13:15:45 -08:00
Trenton H
9d5e618de8 Chore: pytest style paperless tests (#12254) 2026-03-06 13:04:23 -08:00
Trenton H
50ae49c7da Chore: Uploads the digests as just files, no zips (#12264) 2026-03-06 12:56:34 -08:00
shamoon
ba023ef332 Chore: Add anti-slop job to PR workflow (#12248) 2026-03-06 20:36:24 +00:00
GitHub Actions
7345f2e81c Auto translate strings 2026-03-06 20:01:12 +00:00
shamoon
731448a8f9 Fixhancement: support version-specific edits (#12233) 2026-03-06 11:59:26 -08:00
146 changed files with 14246 additions and 9716 deletions

View File

@@ -12,6 +12,8 @@ updates:
open-pull-requests-limit: 10
schedule:
interval: "monthly"
cooldown:
default-days: 7
labels:
- "frontend"
- "dependencies"
@@ -36,7 +38,9 @@ updates:
directory: "/"
# Check for updates once a week
schedule:
interval: "weekly"
interval: "monthly"
cooldown:
default-days: 7
labels:
- "backend"
- "dependencies"
@@ -97,6 +101,8 @@ updates:
schedule:
# Check for updates to GitHub Actions every month
interval: "monthly"
cooldown:
default-days: 7
labels:
- "ci-cd"
- "dependencies"
@@ -112,7 +118,9 @@ updates:
- "/"
- "/.devcontainer/"
schedule:
interval: "weekly"
interval: "monthly"
cooldown:
default-days: 7
open-pull-requests-limit: 5
labels:
- "dependencies"
@@ -123,7 +131,9 @@ updates:
- package-ecosystem: "docker-compose"
directory: "/docker/compose/"
schedule:
interval: "weekly"
interval: "monthly"
cooldown:
default-days: 7
open-pull-requests-limit: 5
labels:
- "dependencies"
@@ -147,3 +157,11 @@ updates:
postgres:
patterns:
- "docker.io/library/postgres*"
- package-ecosystem: "pre-commit" # See documentation for possible values
directory: "/" # Location of package manifests
schedule:
interval: "monthly"
groups:
pre-commit-dependencies:
patterns:
- "*"

View File

@@ -3,21 +3,9 @@ on:
push:
branches-ignore:
- 'translations**'
paths:
- 'src/**'
- 'pyproject.toml'
- 'uv.lock'
- 'docker/compose/docker-compose.ci-test.yml'
- '.github/workflows/ci-backend.yml'
pull_request:
branches-ignore:
- 'translations**'
paths:
- 'src/**'
- 'pyproject.toml'
- 'uv.lock'
- 'docker/compose/docker-compose.ci-test.yml'
- '.github/workflows/ci-backend.yml'
workflow_dispatch:
concurrency:
group: backend-${{ github.event.pull_request.number || github.ref }}
@@ -26,7 +14,55 @@ env:
DEFAULT_UV_VERSION: "0.10.x"
NLTK_DATA: "/usr/share/nltk_data"
jobs:
changes:
name: Detect Backend Changes
runs-on: ubuntu-slim
outputs:
backend_changed: ${{ steps.force.outputs.run_all == 'true' || steps.filter.outputs.backend == 'true' }}
steps:
- name: Checkout
uses: actions/checkout@v6.0.2
with:
fetch-depth: 0
- name: Decide run mode
id: force
run: |
if [[ "${{ github.event_name }}" == "workflow_dispatch" ]]; then
echo "run_all=true" >> "$GITHUB_OUTPUT"
elif [[ "${{ github.event_name }}" == "push" && ( "${{ github.ref_name }}" == "main" || "${{ github.ref_name }}" == "dev" ) ]]; then
echo "run_all=true" >> "$GITHUB_OUTPUT"
else
echo "run_all=false" >> "$GITHUB_OUTPUT"
fi
- name: Set diff range
id: range
if: steps.force.outputs.run_all != 'true'
run: |
if [[ "${{ github.event_name }}" == "pull_request" ]]; then
echo "base=${{ github.event.pull_request.base.sha }}" >> "$GITHUB_OUTPUT"
elif [[ "${{ github.event.created }}" == "true" ]]; then
echo "base=${{ github.event.repository.default_branch }}" >> "$GITHUB_OUTPUT"
else
echo "base=${{ github.event.before }}" >> "$GITHUB_OUTPUT"
fi
echo "ref=${{ github.sha }}" >> "$GITHUB_OUTPUT"
- name: Detect changes
id: filter
if: steps.force.outputs.run_all != 'true'
uses: dorny/paths-filter@v3.0.2
with:
base: ${{ steps.range.outputs.base }}
ref: ${{ steps.range.outputs.ref }}
filters: |
backend:
- 'src/**'
- 'pyproject.toml'
- 'uv.lock'
- 'docker/compose/docker-compose.ci-test.yml'
- '.github/workflows/ci-backend.yml'
test:
needs: changes
if: needs.changes.outputs.backend_changed == 'true'
name: "Python ${{ matrix.python-version }}"
runs-on: ubuntu-24.04
strategy:
@@ -100,6 +136,8 @@ jobs:
docker compose --file docker/compose/docker-compose.ci-test.yml logs
docker compose --file docker/compose/docker-compose.ci-test.yml down
typing:
needs: changes
if: needs.changes.outputs.backend_changed == 'true'
name: Check project typing
runs-on: ubuntu-24.04
env:
@@ -150,3 +188,27 @@ jobs:
--show-error-codes \
--warn-unused-configs \
src/ | uv run mypy-baseline filter
gate:
name: Backend CI Gate
needs: [changes, test, typing]
if: always()
runs-on: ubuntu-slim
steps:
- name: Check gate
run: |
if [[ "${{ needs.changes.outputs.backend_changed }}" != "true" ]]; then
echo "No backend-relevant changes detected."
exit 0
fi
if [[ "${{ needs.test.result }}" != "success" ]]; then
echo "::error::Backend test job result: ${{ needs.test.result }}"
exit 1
fi
if [[ "${{ needs.typing.result }}" != "success" ]]; then
echo "::error::Backend typing job result: ${{ needs.typing.result }}"
exit 1
fi
echo "Backend checks passed."

View File

@@ -104,9 +104,9 @@ jobs:
echo "repository=${repo_name}"
echo "name=${repo_name}" >> $GITHUB_OUTPUT
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3.12.0
uses: docker/setup-buildx-action@v4.0.0
- name: Login to GitHub Container Registry
uses: docker/login-action@v3.7.0
uses: docker/login-action@v4.0.0
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
@@ -149,15 +149,16 @@ jobs:
mkdir -p /tmp/digests
digest="${{ steps.build.outputs.digest }}"
echo "digest=${digest}"
touch "/tmp/digests/${digest#sha256:}"
echo "${digest}" > "/tmp/digests/digest-${{ matrix.arch }}.txt"
- name: Upload digest
if: steps.check-push.outputs.should-push == 'true'
uses: actions/upload-artifact@v7.0.0
with:
name: digests-${{ matrix.arch }}
path: /tmp/digests/*
path: /tmp/digests/digest-${{ matrix.arch }}.txt
if-no-files-found: error
retention-days: 1
archive: false
merge-and-push:
name: Merge and Push Manifest
runs-on: ubuntu-24.04
@@ -171,29 +172,29 @@ jobs:
uses: actions/download-artifact@v8.0.0
with:
path: /tmp/digests
pattern: digests-*
pattern: digest-*.txt
merge-multiple: true
- name: List digests
run: |
echo "Downloaded digests:"
ls -la /tmp/digests/
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3.12.0
uses: docker/setup-buildx-action@v4.0.0
- name: Login to GitHub Container Registry
uses: docker/login-action@v3.7.0
uses: docker/login-action@v4.0.0
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Docker Hub
if: needs.build-arch.outputs.push-external == 'true'
uses: docker/login-action@v3.7.0
uses: docker/login-action@v4.0.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to Quay.io
if: needs.build-arch.outputs.push-external == 'true'
uses: docker/login-action@v3.7.0
uses: docker/login-action@v4.0.0
with:
registry: quay.io
username: ${{ secrets.QUAY_USERNAME }}
@@ -217,8 +218,9 @@ jobs:
tags=$(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "${DOCKER_METADATA_OUTPUT_JSON}")
digests=""
for digest in *; do
digests+="${{ env.REGISTRY }}/${REPOSITORY}@sha256:${digest} "
for digest_file in digest-*.txt; do
digest=$(cat "${digest_file}")
digests+="${{ env.REGISTRY }}/${REPOSITORY}@${digest} "
done
echo "Creating manifest with tags: ${tags}"

View File

@@ -1,22 +1,9 @@
name: Documentation
on:
push:
branches:
- main
- dev
paths:
- 'docs/**'
- 'zensical.toml'
- 'pyproject.toml'
- 'uv.lock'
- '.github/workflows/ci-docs.yml'
branches-ignore:
- 'translations**'
pull_request:
paths:
- 'docs/**'
- 'zensical.toml'
- 'pyproject.toml'
- 'uv.lock'
- '.github/workflows/ci-docs.yml'
workflow_dispatch:
concurrency:
group: docs-${{ github.event.pull_request.number || github.ref }}
@@ -29,7 +16,55 @@ env:
DEFAULT_UV_VERSION: "0.10.x"
DEFAULT_PYTHON_VERSION: "3.12"
jobs:
changes:
name: Detect Docs Changes
runs-on: ubuntu-slim
outputs:
docs_changed: ${{ steps.force.outputs.run_all == 'true' || steps.filter.outputs.docs == 'true' }}
steps:
- name: Checkout
uses: actions/checkout@v6.0.2
with:
fetch-depth: 0
- name: Decide run mode
id: force
run: |
if [[ "${{ github.event_name }}" == "workflow_dispatch" ]]; then
echo "run_all=true" >> "$GITHUB_OUTPUT"
elif [[ "${{ github.event_name }}" == "push" && ( "${{ github.ref_name }}" == "main" || "${{ github.ref_name }}" == "dev" ) ]]; then
echo "run_all=true" >> "$GITHUB_OUTPUT"
else
echo "run_all=false" >> "$GITHUB_OUTPUT"
fi
- name: Set diff range
id: range
if: steps.force.outputs.run_all != 'true'
run: |
if [[ "${{ github.event_name }}" == "pull_request" ]]; then
echo "base=${{ github.event.pull_request.base.sha }}" >> "$GITHUB_OUTPUT"
elif [[ "${{ github.event.created }}" == "true" ]]; then
echo "base=${{ github.event.repository.default_branch }}" >> "$GITHUB_OUTPUT"
else
echo "base=${{ github.event.before }}" >> "$GITHUB_OUTPUT"
fi
echo "ref=${{ github.sha }}" >> "$GITHUB_OUTPUT"
- name: Detect changes
id: filter
if: steps.force.outputs.run_all != 'true'
uses: dorny/paths-filter@v3.0.2
with:
base: ${{ steps.range.outputs.base }}
ref: ${{ steps.range.outputs.ref }}
filters: |
docs:
- 'docs/**'
- 'zensical.toml'
- 'pyproject.toml'
- 'uv.lock'
- '.github/workflows/ci-docs.yml'
build:
needs: changes
if: needs.changes.outputs.docs_changed == 'true'
name: Build Documentation
runs-on: ubuntu-24.04
steps:
@@ -64,8 +99,8 @@ jobs:
name: github-pages-${{ github.run_id }}-${{ github.run_attempt }}
deploy:
name: Deploy Documentation
needs: build
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
needs: [changes, build]
if: github.event_name == 'push' && github.ref == 'refs/heads/main' && needs.changes.outputs.docs_changed == 'true'
runs-on: ubuntu-24.04
environment:
name: github-pages
@@ -76,3 +111,22 @@ jobs:
id: deployment
with:
artifact_name: github-pages-${{ github.run_id }}-${{ github.run_attempt }}
gate:
name: Docs CI Gate
needs: [changes, build]
if: always()
runs-on: ubuntu-slim
steps:
- name: Check gate
run: |
if [[ "${{ needs.changes.outputs.docs_changed }}" != "true" ]]; then
echo "No docs-relevant changes detected."
exit 0
fi
if [[ "${{ needs.build.result }}" != "success" ]]; then
echo "::error::Docs build job result: ${{ needs.build.result }}"
exit 1
fi
echo "Docs checks passed."

View File

@@ -3,21 +3,60 @@ on:
push:
branches-ignore:
- 'translations**'
paths:
- 'src-ui/**'
- '.github/workflows/ci-frontend.yml'
pull_request:
branches-ignore:
- 'translations**'
paths:
- 'src-ui/**'
- '.github/workflows/ci-frontend.yml'
workflow_dispatch:
concurrency:
group: frontend-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
changes:
name: Detect Frontend Changes
runs-on: ubuntu-slim
outputs:
frontend_changed: ${{ steps.force.outputs.run_all == 'true' || steps.filter.outputs.frontend == 'true' }}
steps:
- name: Checkout
uses: actions/checkout@v6.0.2
with:
fetch-depth: 0
- name: Decide run mode
id: force
run: |
if [[ "${{ github.event_name }}" == "workflow_dispatch" ]]; then
echo "run_all=true" >> "$GITHUB_OUTPUT"
elif [[ "${{ github.event_name }}" == "push" && ( "${{ github.ref_name }}" == "main" || "${{ github.ref_name }}" == "dev" ) ]]; then
echo "run_all=true" >> "$GITHUB_OUTPUT"
else
echo "run_all=false" >> "$GITHUB_OUTPUT"
fi
- name: Set diff range
id: range
if: steps.force.outputs.run_all != 'true'
run: |
if [[ "${{ github.event_name }}" == "pull_request" ]]; then
echo "base=${{ github.event.pull_request.base.sha }}" >> "$GITHUB_OUTPUT"
elif [[ "${{ github.event.created }}" == "true" ]]; then
echo "base=${{ github.event.repository.default_branch }}" >> "$GITHUB_OUTPUT"
else
echo "base=${{ github.event.before }}" >> "$GITHUB_OUTPUT"
fi
echo "ref=${{ github.sha }}" >> "$GITHUB_OUTPUT"
- name: Detect changes
id: filter
if: steps.force.outputs.run_all != 'true'
uses: dorny/paths-filter@v3.0.2
with:
base: ${{ steps.range.outputs.base }}
ref: ${{ steps.range.outputs.ref }}
filters: |
frontend:
- 'src-ui/**'
- '.github/workflows/ci-frontend.yml'
install-dependencies:
needs: changes
if: needs.changes.outputs.frontend_changed == 'true'
name: Install Dependencies
runs-on: ubuntu-24.04
steps:
@@ -28,7 +67,7 @@ jobs:
with:
version: 10
- name: Use Node.js 24
uses: actions/setup-node@v6.2.0
uses: actions/setup-node@v6.3.0
with:
node-version: 24.x
cache: 'pnpm'
@@ -45,7 +84,8 @@ jobs:
run: cd src-ui && pnpm install
lint:
name: Lint
needs: install-dependencies
needs: [changes, install-dependencies]
if: needs.changes.outputs.frontend_changed == 'true'
runs-on: ubuntu-24.04
steps:
- name: Checkout
@@ -55,7 +95,7 @@ jobs:
with:
version: 10
- name: Use Node.js 24
uses: actions/setup-node@v6.2.0
uses: actions/setup-node@v6.3.0
with:
node-version: 24.x
cache: 'pnpm'
@@ -73,7 +113,8 @@ jobs:
run: cd src-ui && pnpm run lint
unit-tests:
name: "Unit Tests (${{ matrix.shard-index }}/${{ matrix.shard-count }})"
needs: install-dependencies
needs: [changes, install-dependencies]
if: needs.changes.outputs.frontend_changed == 'true'
runs-on: ubuntu-24.04
strategy:
fail-fast: false
@@ -89,7 +130,7 @@ jobs:
with:
version: 10
- name: Use Node.js 24
uses: actions/setup-node@v6.2.0
uses: actions/setup-node@v6.3.0
with:
node-version: 24.x
cache: 'pnpm'
@@ -119,7 +160,8 @@ jobs:
directory: src-ui/coverage/
e2e-tests:
name: "E2E Tests (${{ matrix.shard-index }}/${{ matrix.shard-count }})"
needs: install-dependencies
needs: [changes, install-dependencies]
if: needs.changes.outputs.frontend_changed == 'true'
runs-on: ubuntu-24.04
container: mcr.microsoft.com/playwright:v1.58.2-noble
env:
@@ -139,7 +181,7 @@ jobs:
with:
version: 10
- name: Use Node.js 24
uses: actions/setup-node@v6.2.0
uses: actions/setup-node@v6.3.0
with:
node-version: 24.x
cache: 'pnpm'
@@ -159,7 +201,8 @@ jobs:
run: cd src-ui && pnpm exec playwright test --shard ${{ matrix.shard-index }}/${{ matrix.shard-count }}
bundle-analysis:
name: Bundle Analysis
needs: [unit-tests, e2e-tests]
needs: [changes, unit-tests, e2e-tests]
if: needs.changes.outputs.frontend_changed == 'true'
runs-on: ubuntu-24.04
steps:
- name: Checkout
@@ -171,7 +214,7 @@ jobs:
with:
version: 10
- name: Use Node.js 24
uses: actions/setup-node@v6.2.0
uses: actions/setup-node@v6.3.0
with:
node-version: 24.x
cache: 'pnpm'
@@ -189,3 +232,42 @@ jobs:
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
run: cd src-ui && pnpm run build --configuration=production
gate:
name: Frontend CI Gate
needs: [changes, install-dependencies, lint, unit-tests, e2e-tests, bundle-analysis]
if: always()
runs-on: ubuntu-slim
steps:
- name: Check gate
run: |
if [[ "${{ needs.changes.outputs.frontend_changed }}" != "true" ]]; then
echo "No frontend-relevant changes detected."
exit 0
fi
if [[ "${{ needs['install-dependencies'].result }}" != "success" ]]; then
echo "::error::Frontend install job result: ${{ needs['install-dependencies'].result }}"
exit 1
fi
if [[ "${{ needs.lint.result }}" != "success" ]]; then
echo "::error::Frontend lint job result: ${{ needs.lint.result }}"
exit 1
fi
if [[ "${{ needs['unit-tests'].result }}" != "success" ]]; then
echo "::error::Frontend unit-tests job result: ${{ needs['unit-tests'].result }}"
exit 1
fi
if [[ "${{ needs['e2e-tests'].result }}" != "success" ]]; then
echo "::error::Frontend e2e-tests job result: ${{ needs['e2e-tests'].result }}"
exit 1
fi
if [[ "${{ needs['bundle-analysis'].result }}" != "success" ]]; then
echo "::error::Frontend bundle-analysis job result: ${{ needs['bundle-analysis'].result }}"
exit 1
fi
echo "Frontend checks passed."

View File

@@ -35,7 +35,7 @@ jobs:
with:
version: 10
- name: Use Node.js 24
uses: actions/setup-node@v6.2.0
uses: actions/setup-node@v6.3.0
with:
node-version: 24.x
cache: 'pnpm'

View File

@@ -2,13 +2,24 @@ name: PR Bot
on:
pull_request_target:
types: [opened]
permissions:
contents: read
pull-requests: write
jobs:
anti-slop:
runs-on: ubuntu-latest
permissions:
contents: read
issues: read
pull-requests: write
steps:
- uses: peakoss/anti-slop@v0.2.1
with:
max-failures: 4
failure-add-pr-labels: 'ai'
pr-bot:
name: Automated PR Bot
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: write
steps:
- name: Label PR by file path or branch name
# see .github/labeler.yml for the labeler config

View File

@@ -40,7 +40,7 @@ jobs:
with:
version: 10
- name: Use Node.js 24
uses: actions/setup-node@v6.2.0
uses: actions/setup-node@v6.3.0
with:
node-version: 24.x
cache: 'pnpm'

View File

@@ -29,7 +29,7 @@ repos:
- id: check-case-conflict
- id: detect-private-key
- repo: https://github.com/codespell-project/codespell
rev: v2.4.1
rev: v2.4.2
hooks:
- id: codespell
additional_dependencies: [tomli]
@@ -46,11 +46,11 @@ repos:
- ts
- markdown
additional_dependencies:
- prettier@3.3.3
- 'prettier-plugin-organize-imports@4.1.0'
- prettier@3.8.1
- 'prettier-plugin-organize-imports@4.3.0'
# Python hooks
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.15.0
rev: v0.15.5
hooks:
- id: ruff-check
- id: ruff-format
@@ -65,7 +65,7 @@ repos:
- id: hadolint
# Shell script hooks
- repo: https://github.com/lovesegfault/beautysh
rev: v6.4.2
rev: v6.4.3
hooks:
- id: beautysh
types: [file]

View File

@@ -5,14 +5,6 @@ const config = {
singleQuote: true,
// https://prettier.io/docs/en/options.html#trailing-commas
trailingComma: 'es5',
overrides: [
{
files: ['docs/*.md'],
options: {
tabWidth: 4,
},
},
],
plugins: [require('prettier-plugin-organize-imports')],
}

View File

@@ -30,7 +30,7 @@ RUN set -eux \
# Purpose: Installs s6-overlay and rootfs
# Comments:
# - Don't leave anything extra in here either
FROM ghcr.io/astral-sh/uv:0.10.7-python3.12-trixie-slim AS s6-overlay-base
FROM ghcr.io/astral-sh/uv:0.10.9-python3.12-trixie-slim AS s6-overlay-base
WORKDIR /usr/src/s6

View File

@@ -56,6 +56,7 @@ services:
environment:
PAPERLESS_REDIS: redis://broker:6379
PAPERLESS_DBHOST: db
PAPERLESS_DBENGINE: postgres
env_file:
- stack.env
volumes:

View File

@@ -62,6 +62,7 @@ services:
environment:
PAPERLESS_REDIS: redis://broker:6379
PAPERLESS_DBHOST: db
PAPERLESS_DBENGINE: postgresql
PAPERLESS_TIKA_ENABLED: 1
PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg:3000
PAPERLESS_TIKA_ENDPOINT: http://tika:9998

View File

@@ -56,6 +56,7 @@ services:
environment:
PAPERLESS_REDIS: redis://broker:6379
PAPERLESS_DBHOST: db
PAPERLESS_DBENGINE: postgresql
volumes:
data:
media:

View File

@@ -51,6 +51,7 @@ services:
env_file: docker-compose.env
environment:
PAPERLESS_REDIS: redis://broker:6379
PAPERLESS_DBENGINE: sqlite
PAPERLESS_TIKA_ENABLED: 1
PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg:3000
PAPERLESS_TIKA_ENDPOINT: http://tika:9998

View File

@@ -42,6 +42,7 @@ services:
env_file: docker-compose.env
environment:
PAPERLESS_REDIS: redis://broker:6379
PAPERLESS_DBENGINE: sqlite
volumes:
data:
media:

View File

@@ -10,8 +10,10 @@ cd "${PAPERLESS_SRC_DIR}"
# The whole migrate, with flock, needs to run as the right user
if [[ -n "${USER_IS_NON_ROOT}" ]]; then
python3 manage.py check --tag compatibility paperless || exit 1
exec s6-setlock -n "${data_dir}/migration_lock" python3 manage.py migrate --skip-checks --no-input
else
s6-setuidgid paperless python3 manage.py check --tag compatibility paperless || exit 1
exec s6-setuidgid paperless \
s6-setlock -n "${data_dir}/migration_lock" \
python3 manage.py migrate --skip-checks --no-input

View File

@@ -10,16 +10,16 @@ consuming documents at that time.
Options available to any installation of paperless:
- Use the [document exporter](#exporter). The document exporter exports all your documents,
thumbnails, metadata, and database contents to a specific folder. You may import your
documents and settings into a fresh instance of paperless again or store your
documents in another DMS with this export.
- Use the [document exporter](#exporter). The document exporter exports all your documents,
thumbnails, metadata, and database contents to a specific folder. You may import your
documents and settings into a fresh instance of paperless again or store your
documents in another DMS with this export.
The document exporter is also able to update an already existing
export. Therefore, incremental backups with `rsync` are entirely
possible.
The document exporter is also able to update an already existing
export. Therefore, incremental backups with `rsync` are entirely
possible.
The exporter does not include API tokens and they will need to be re-generated after importing.
The exporter does not include API tokens and they will need to be re-generated after importing.
!!! caution
@@ -29,28 +29,27 @@ Options available to any installation of paperless:
Options available to docker installations:
- Backup the docker volumes. These usually reside within
`/var/lib/docker/volumes` on the host and you need to be root in
order to access them.
- Backup the docker volumes. These usually reside within
`/var/lib/docker/volumes` on the host and you need to be root in
order to access them.
Paperless uses 4 volumes:
- `paperless_media`: This is where your documents are stored.
- `paperless_data`: This is where auxiliary data is stored. This
folder also contains the SQLite database, if you use it.
- `paperless_pgdata`: Exists only if you use PostgreSQL and
contains the database.
- `paperless_dbdata`: Exists only if you use MariaDB and contains
the database.
Paperless uses 4 volumes:
- `paperless_media`: This is where your documents are stored.
- `paperless_data`: This is where auxiliary data is stored. This
folder also contains the SQLite database, if you use it.
- `paperless_pgdata`: Exists only if you use PostgreSQL and
contains the database.
- `paperless_dbdata`: Exists only if you use MariaDB and contains
the database.
Options available to bare-metal and non-docker installations:
- Backup the entire paperless folder. This ensures that if your
paperless instance crashes at some point or your disk fails, you can
simply copy the folder back into place and it works.
- Backup the entire paperless folder. This ensures that if your
paperless instance crashes at some point or your disk fails, you can
simply copy the folder back into place and it works.
When using PostgreSQL or MariaDB, you'll also have to backup the
database.
When using PostgreSQL or MariaDB, you'll also have to backup the
database.
### Restoring {#migrating-restoring}
@@ -509,19 +508,19 @@ collection for issues.
The issues detected by the sanity checker are as follows:
- Missing original files.
- Missing archive files.
- Inaccessible original files due to improper permissions.
- Inaccessible archive files due to improper permissions.
- Corrupted original documents by comparing their checksum against
what is stored in the database.
- Corrupted archive documents by comparing their checksum against what
is stored in the database.
- Missing thumbnails.
- Inaccessible thumbnails due to improper permissions.
- Documents without any content (warning).
- Orphaned files in the media directory (warning). These are files
that are not referenced by any document in paperless.
- Missing original files.
- Missing archive files.
- Inaccessible original files due to improper permissions.
- Inaccessible archive files due to improper permissions.
- Corrupted original documents by comparing their checksum against
what is stored in the database.
- Corrupted archive documents by comparing their checksum against what
is stored in the database.
- Missing thumbnails.
- Inaccessible thumbnails due to improper permissions.
- Documents without any content (warning).
- Orphaned files in the media directory (warning). These are files
that are not referenced by any document in paperless.
```
document_sanity_checker

View File

@@ -25,20 +25,20 @@ documents.
The following algorithms are available:
- **None:** No matching will be performed.
- **Any:** Looks for any occurrence of any word provided in match in
the PDF. If you define the match as `Bank1 Bank2`, it will match
documents containing either of these terms.
- **All:** Requires that every word provided appears in the PDF,
albeit not in the order provided.
- **Exact:** Matches only if the match appears exactly as provided
(i.e. preserve ordering) in the PDF.
- **Regular expression:** Parses the match as a regular expression and
tries to find a match within the document.
- **Fuzzy match:** Uses a partial matching based on locating the tag text
inside the document, using a [partial ratio](https://rapidfuzz.github.io/RapidFuzz/Usage/fuzz.html#partial-ratio)
- **Auto:** Tries to automatically match new documents. This does not
require you to set a match. See the [notes below](#automatic-matching).
- **None:** No matching will be performed.
- **Any:** Looks for any occurrence of any word provided in match in
the PDF. If you define the match as `Bank1 Bank2`, it will match
documents containing either of these terms.
- **All:** Requires that every word provided appears in the PDF,
albeit not in the order provided.
- **Exact:** Matches only if the match appears exactly as provided
(i.e. preserve ordering) in the PDF.
- **Regular expression:** Parses the match as a regular expression and
tries to find a match within the document.
- **Fuzzy match:** Uses a partial matching based on locating the tag text
inside the document, using a [partial ratio](https://rapidfuzz.github.io/RapidFuzz/Usage/fuzz.html#partial-ratio)
- **Auto:** Tries to automatically match new documents. This does not
require you to set a match. See the [notes below](#automatic-matching).
When using the _any_ or _all_ matching algorithms, you can search for
terms that consist of multiple words by enclosing them in double quotes.
@@ -69,33 +69,33 @@ Paperless tries to hide much of the involved complexity with this
approach. However, there are a couple caveats you need to keep in mind
when using this feature:
- Changes to your documents are not immediately reflected by the
matching algorithm. The neural network needs to be _trained_ on your
documents after changes. Paperless periodically (default: once each
hour) checks for changes and does this automatically for you.
- The Auto matching algorithm only takes documents into account which
are NOT placed in your inbox (i.e. have any inbox tags assigned to
them). This ensures that the neural network only learns from
documents which you have correctly tagged before.
- The matching algorithm can only work if there is a correlation
between the tag, correspondent, document type, or storage path and
the document itself. Your bank statements usually contain your bank
account number and the name of the bank, so this works reasonably
well, However, tags such as "TODO" cannot be automatically
assigned.
- The matching algorithm needs a reasonable number of documents to
identify when to assign tags, correspondents, storage paths, and
types. If one out of a thousand documents has the correspondent
"Very obscure web shop I bought something five years ago", it will
probably not assign this correspondent automatically if you buy
something from them again. The more documents, the better.
- Paperless also needs a reasonable amount of negative examples to
decide when not to assign a certain tag, correspondent, document
type, or storage path. This will usually be the case as you start
filling up paperless with documents. Example: If all your documents
are either from "Webshop" or "Bank", paperless will assign one
of these correspondents to ANY new document, if both are set to
automatic matching.
- Changes to your documents are not immediately reflected by the
matching algorithm. The neural network needs to be _trained_ on your
documents after changes. Paperless periodically (default: once each
hour) checks for changes and does this automatically for you.
- The Auto matching algorithm only takes documents into account which
are NOT placed in your inbox (i.e. have any inbox tags assigned to
them). This ensures that the neural network only learns from
documents which you have correctly tagged before.
- The matching algorithm can only work if there is a correlation
between the tag, correspondent, document type, or storage path and
the document itself. Your bank statements usually contain your bank
account number and the name of the bank, so this works reasonably
well, However, tags such as "TODO" cannot be automatically
assigned.
- The matching algorithm needs a reasonable number of documents to
identify when to assign tags, correspondents, storage paths, and
types. If one out of a thousand documents has the correspondent
"Very obscure web shop I bought something five years ago", it will
probably not assign this correspondent automatically if you buy
something from them again. The more documents, the better.
- Paperless also needs a reasonable amount of negative examples to
decide when not to assign a certain tag, correspondent, document
type, or storage path. This will usually be the case as you start
filling up paperless with documents. Example: If all your documents
are either from "Webshop" or "Bank", paperless will assign one
of these correspondents to ANY new document, if both are set to
automatic matching.
## Hooking into the consumption process {#consume-hooks}
@@ -243,12 +243,12 @@ webserver:
Troubleshooting:
- Monitor the Docker Compose log
`cd ~/paperless-ngx; docker compose logs -f`
- Check your script's permission e.g. in case of permission error
`sudo chmod 755 post-consumption-example.sh`
- Pipe your scripts's output to a log file e.g.
`echo "${DOCUMENT_ID}" | tee --append /usr/src/paperless/scripts/post-consumption-example.log`
- Monitor the Docker Compose log
`cd ~/paperless-ngx; docker compose logs -f`
- Check your script's permission e.g. in case of permission error
`sudo chmod 755 post-consumption-example.sh`
- Pipe your scripts's output to a log file e.g.
`echo "${DOCUMENT_ID}" | tee --append /usr/src/paperless/scripts/post-consumption-example.log`
## File name handling {#file-name-handling}
@@ -307,35 +307,35 @@ will create a directory structure as follows:
Paperless provides the following variables for use within filenames:
- `{{ asn }}`: The archive serial number of the document, or "none".
- `{{ correspondent }}`: The name of the correspondent, or "none".
- `{{ document_type }}`: The name of the document type, or "none".
- `{{ tag_list }}`: A comma separated list of all tags assigned to the
  document.
- `{{ title }}`: The title of the document.
- `{{ created }}`: The full date (ISO 8601 format, e.g. `2024-03-14`) the document was created.
- `{{ created_year }}`: Year created only, formatted as the year with
  century.
- `{{ created_year_short }}`: Year created only, formatted as the year
  without century, zero padded.
- `{{ created_month }}`: Month created only (number 01-12).
- `{{ created_month_name }}`: Month created name, as per locale
- `{{ created_month_name_short }}`: Month created abbreviated name, as per
  locale
- `{{ created_day }}`: Day created only (number 01-31).
- `{{ added }}`: The full date (ISO format) the document was added to
  paperless.
- `{{ added_year }}`: Year added only.
- `{{ added_year_short }}`: Year added only, formatted as the year without
  century, zero padded.
- `{{ added_month }}`: Month added only (number 01-12).
- `{{ added_month_name }}`: Month added name, as per locale
- `{{ added_month_name_short }}`: Month added abbreviated name, as per
  locale
- `{{ added_day }}`: Day added only (number 01-31).
- `{{ owner_username }}`: Username of document owner, if any, or "none"
- `{{ original_name }}`: Document original filename, minus the extension, if any, or "none"
- `{{ doc_pk }}`: The paperless identifier (primary key) for the document.
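For instance, a format that files documents by year and correspondent could be configured as follows (a sketch; the exact value is up to your preferred layout):

```bash
# Hypothetical filename format: one folder per year, then per correspondent.
PAPERLESS_FILENAME_FORMAT={{ created_year }}/{{ correspondent }}/{{ title }}
```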
!!! warning
@@ -388,10 +388,10 @@ before empty placeholders are removed as well, empty directories are omitted.
When a single storage layout is not sufficient for your use case, storage paths allow for more complex
structure to set precisely where each document is stored in the file system.
- Each storage path is a [`PAPERLESS_FILENAME_FORMAT`](configuration.md#PAPERLESS_FILENAME_FORMAT) and
  follows the rules described above
- Each document is assigned a storage path using the matching algorithms described above, but can be
  overwritten at any time
For example, you could define the following two storage paths:
@@ -457,13 +457,13 @@ The `get_cf_value` filter retrieves a value from custom field data with optional
###### Parameters
- `custom_fields`: This _must_ be the provided custom field data
- `name` (str): Name of the custom field to retrieve
- `default` (str, optional): Default value to return if field is not found or has no value
###### Returns
- `str | None`: The field value, default value, or `None` if neither exists
###### Examples
@@ -487,12 +487,12 @@ The `datetime` filter formats a datetime string or datetime object using Python'
###### Parameters
- `value` (str | datetime): Date/time value to format (strings will be parsed automatically)
- `format` (str): Python strftime format string
###### Returns
- `str`: Formatted datetime string
###### Examples
@@ -525,13 +525,13 @@ An ISO string can also be provided to control the output format.
###### Parameters
- `value` (date | datetime | str): Date, datetime object or ISO string to format (datetime should be timezone-aware)
- `format` (str): Format type - either a Babel preset ('short', 'medium', 'long', 'full') or custom pattern
- `locale` (str): Locale code for localization (e.g., 'en_US', 'fr_FR', 'de_DE')
###### Returns
- `str`: Localized, formatted date string
###### Examples
@@ -565,15 +565,15 @@ See the [supported format codes](https://unicode.org/reports/tr35/tr35-dates.htm
### Format Presets
- **short**: Abbreviated format (e.g., "1/15/24")
- **medium**: Medium-length format (e.g., "Jan 15, 2024")
- **long**: Long format with full month name (e.g., "January 15, 2024")
- **full**: Full format including day of week (e.g., "Monday, January 15, 2024")
#### Additional Variables
- `{{ tag_name_list }}`: A list of tag names applied to the document, ordered by the tag name. Note this is a list, not a single string
- `{{ custom_fields }}`: A mapping of custom field names to their type and value. A user can access the mapping by field name or check if a field is applied by checking its existence in the variable.
!!! tip
@@ -675,15 +675,15 @@ installation, you can use volumes to accomplish this:
```yaml
services:
  # ...
  webserver:
    environment:
      - PAPERLESS_ENABLE_FLOWER
    ports:
      - 5555:5555 # (2)!
    # ...
    volumes:
      - /path/to/my/flowerconfig.py:/usr/src/paperless/src/paperless/flowerconfig.py:ro # (1)!
```
1. Note the `:ro` tag means the file will be mounted as read only.
@@ -714,11 +714,11 @@ For example, using Docker Compose:
```yaml
services:
  # ...
  webserver:
    # ...
    volumes:
      - /path/to/my/scripts:/custom-cont-init.d:ro # (1)!
```
1. Note the `:ro` tag means the folder will be mounted as read only. This is for extra security against changes
@@ -771,16 +771,16 @@ Paperless is able to utilize barcodes for automatically performing some tasks.
At this time, the library utilized for detection of barcodes supports the following types:
- EAN-13/UPC-A
- UPC-E
- EAN-8
- Code 128
- Code 93
- Code 39
- Codabar
- Interleaved 2 of 5
- QR Code
- SQ Code
For usage in Paperless, the type of barcode does not matter, only its contents.
@@ -793,8 +793,8 @@ below.
If document splitting is enabled, Paperless splits _after_ a separator barcode by default.
This means:
- any page containing the configured separator barcode starts a new document, starting with the **next** page
- pages containing the separator barcode are discarded
This is intended for dedicated separator sheets such as PATCH-T pages.
@@ -831,10 +831,10 @@ to `true`.
When enabled, documents will be split at pages containing tag barcodes, similar to how
ASN barcodes work. Key features:
- The page with the tag barcode is **retained** in the resulting document
- **Each split document extracts its own tags** - only tags on pages within that document are assigned
- Multiple tag barcodes can trigger multiple splits in the same document
- Works seamlessly with ASN barcodes - each split document gets its own ASN and tags
This is useful for batch scanning where you place tag barcode pages between different
documents to both separate and categorize them in a single operation.
@@ -996,9 +996,9 @@ If using docker, you'll need to add the following volume mounts to your `docker-
```yaml
webserver:
  volumes:
    - /home/user/.gnupg/pubring.gpg:/usr/src/paperless/.gnupg/pubring.gpg
    - <path to gpg-agent socket>:/usr/src/paperless/.gnupg/S.gpg-agent
```
For a 'bare-metal' installation no further configuration is necessary. If you
@@ -1006,9 +1006,9 @@ want to use a separate `GNUPG_HOME`, you can do so by configuring the [PAPERLESS
### Troubleshooting
- Make sure that `gpg-agent` is running on your host machine
- Make sure that encryption and decryption work from inside the container using the `gpg` commands from above
- Check that all files in `/usr/src/paperless/.gnupg` have correct permissions
```shell
paperless@9da1865df327:~/.gnupg$ ls -al

```

@@ -66,10 +66,10 @@ Full text searching is available on the `/api/documents/` endpoint. Two
specific query parameters cause the API to return full text search
results:
- `/api/documents/?query=your%20search%20query`: Search for a document
  using a full text query. For details on the syntax, see [Basic Usage - Searching](usage.md#basic-usage_searching).
- `/api/documents/?more_like_id=1234`: Search for documents similar to
  the document with id 1234.
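For example, a full text query can be issued with curl as follows (a sketch; host and token are placeholders):

```bash
# Search for documents matching "invoice" (hypothetical host and token).
curl -H "Authorization: Token YOUR_TOKEN" \
  "http://localhost:8000/api/documents/?query=invoice"
```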
Pagination works exactly the same as it does for normal requests on this
endpoint.
@@ -106,12 +106,12 @@ attribute with various information about the search results:
}
```
- `score` is an indication of how well this document matches the query
  relative to the other search results.
- `highlights` is an excerpt from the document content and highlights
  the search terms with `<span>` tags as shown above.
- `rank` is the index of the search results. The first result will
  have rank 0.
### Filtering by custom fields
@@ -122,33 +122,33 @@ use cases:
1. Documents with a custom field "due" (date) between Aug 1, 2024 and
   Sept 1, 2024 (inclusive):
   `?custom_field_query=["due", "range", ["2024-08-01", "2024-09-01"]]`
2. Documents with a custom field "customer" (text) that equals "bob"
   (case sensitive):
   `?custom_field_query=["customer", "exact", "bob"]`
3. Documents with a custom field "answered" (boolean) set to `true`:
   `?custom_field_query=["answered", "exact", true]`
4. Documents with a custom field "favorite animal" (select) set to either
   "cat" or "dog":
   `?custom_field_query=["favorite animal", "in", ["cat", "dog"]]`
5. Documents with a custom field "address" (text) that is empty:
   `?custom_field_query=["OR", [["address", "isnull", true], ["address", "exact", ""]]]`
6. Documents that don't have a field called "foo":
   `?custom_field_query=["foo", "exists", false]`
7. Documents that have document links "references" to both document 3 and 7:
   `?custom_field_query=["references", "contains", [3, 7]]`
All field types support basic operations including `exact`, `in`, `isnull`,
and `exists`. String, URL, and monetary fields support case-insensitive
@@ -164,8 +164,8 @@ Get auto completions for a partial search term.
Query parameters:
- `term`: The incomplete term.
- `limit`: Amount of results. Defaults to 10.
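A request might then look like the following sketch (assuming the autocomplete endpoint lives at `/api/search/autocomplete/`; host and token are placeholders):

```bash
# Ask for up to 5 completions of the partial term "inv" (hypothetical host and token).
curl -H "Authorization: Token YOUR_TOKEN" \
  "http://localhost:8000/api/search/autocomplete/?term=inv&limit=5"
```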
Results returned by the endpoint are ordered by importance of the term
in the document index. The first result is the term that has the highest
@@ -189,19 +189,19 @@ from there.
The endpoint supports the following optional form fields:
- `title`: Specify a title that the consumer should use for the
  document.
- `created`: Specify a DateTime where the document was created (e.g.
  "2016-04-19" or "2016-04-19 06:15:00+02:00").
- `correspondent`: Specify the ID of a correspondent that the consumer
  should use for the document.
- `document_type`: Similar to correspondent.
- `storage_path`: Similar to correspondent.
- `tags`: Similar to correspondent. Specify this multiple times to
  have multiple tags added to the document.
- `archive_serial_number`: An optional archive serial number to set.
- `custom_fields`: Either an array of custom field ids to assign (with an empty
  value) to the document or an object mapping field id -> value.
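Putting a few of these together, an upload could look like this sketch (the file itself goes in the multipart field `document`, and `/api/documents/post_document/` is the upload endpoint described above; host, token, and tag IDs are placeholders):

```bash
# Upload a PDF with a title and two tags (hypothetical host, token, and tag IDs).
curl -H "Authorization: Token YOUR_TOKEN" \
  -F "document=@/path/to/scan.pdf" \
  -F "title=My scanned document" \
  -F "tags=1" -F "tags=2" \
  "http://localhost:8000/api/documents/post_document/"
```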
The endpoint will immediately return HTTP 200 if the document consumption
process was started successfully, with the UUID of the consumption task
@@ -215,16 +215,16 @@ consumption including the ID of a created document if consumption succeeded.
Document versions are file-level versions linked to one root document.
- Root document metadata (title, tags, correspondent, document type, storage path, custom fields, permissions) remains shared.
- Version-specific file data (file, mime type, checksums, archive info, extracted text content) belongs to the selected/latest version.
Version-aware endpoints:
- `GET /api/documents/{id}/`: returns root document data; `content` resolves to latest version content by default. Use `?version={version_id}` to resolve content for a specific version.
- `PATCH /api/documents/{id}/`: content updates target the selected version (`?version={version_id}`) or latest version by default; non-content metadata updates target the root document.
- `GET /api/documents/{id}/download/`, `GET /api/documents/{id}/preview/`, `GET /api/documents/{id}/thumb/`, `GET /api/documents/{id}/metadata/`: accept `?version={version_id}`.
- `POST /api/documents/{id}/update_version/`: uploads a new version using multipart form field `document` and optional `version_label`.
- `DELETE /api/documents/{root_id}/versions/{version_id}/`: deletes a non-root version.
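For instance, adding a new version to document 123 could look like this sketch (host and token are placeholders):

```bash
# Upload a new file version with an optional label (hypothetical host and token).
curl -H "Authorization: Token YOUR_TOKEN" \
  -F "document=@/path/to/revised.pdf" \
  -F "version_label=signed copy" \
  "http://localhost:8000/api/documents/123/update_version/"
```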
## Permissions
@@ -282,74 +282,38 @@ a json payload of the format:
The following methods are supported:
- `set_correspondent`
  - Requires `parameters`: `{ "correspondent": CORRESPONDENT_ID }`
- `set_document_type`
  - Requires `parameters`: `{ "document_type": DOCUMENT_TYPE_ID }`
- `set_storage_path`
  - Requires `parameters`: `{ "storage_path": STORAGE_PATH_ID }`
- `add_tag`
  - Requires `parameters`: `{ "tag": TAG_ID }`
- `remove_tag`
  - Requires `parameters`: `{ "tag": TAG_ID }`
- `modify_tags`
  - Requires `parameters`: `{ "add_tags": [LIST_OF_TAG_IDS] }` and `{ "remove_tags": [LIST_OF_TAG_IDS] }`
- `delete`
  - No `parameters` required
- `reprocess`
  - No `parameters` required
- `set_permissions`
  - Requires `parameters`:
    - `"set_permissions": PERMISSIONS_OBJ` (see format [above](#permissions)) and / or
    - `"owner": OWNER_ID or null`
    - `"merge": true or false` (defaults to false)
  - The `merge` flag determines if the supplied permissions will overwrite all existing permissions (including
    removing them) or be merged with existing permissions.
- `modify_custom_fields`
  - Requires `parameters`:
    - `"add_custom_fields": { CUSTOM_FIELD_ID: VALUE }`: a JSON object of custom field id:value pairs to add to the document; this can also be a list of custom field IDs
      to add with empty values.
    - `"remove_custom_fields": [CUSTOM_FIELD_ID]`: custom field ids to remove from the document.
#### Document-editing operations
Beginning with API version 10, document-editing operations (`merge`, `rotate`, `edit_pdf`, etc.) have their own individual endpoints, so their documentation can be found in the API spec / viewer. The legacy document-editing methods via `/api/documents/bulk_edit/` remain supported for compatibility but are deprecated; clients should migrate to the individual endpoints before they are removed in a future version.
### Objects
@@ -369,41 +333,38 @@ operations, using the endpoint: `/api/bulk_edit_objects/`, which requires a json
## API Versioning
The REST API is versioned.
- Versioning ensures that changes to the API don't break older
  clients.
- Clients specify the version of the API they wish to use with every
  request and Paperless will handle the request using the specified
  API version.
- Even if the underlying data model changes, supported older API
  versions continue to serve compatible data.
- If no version is specified, Paperless serves the configured default
  API version (currently `10`).
- Supported API versions are currently `9` and `10`.
API versions are specified by submitting an additional HTTP `Accept`
header with every request:
```
Accept: application/json; version=10
```
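With curl, the header can be supplied like this (a sketch; host and token are placeholders):

```bash
# Explicitly request API version 10 (hypothetical host and token).
curl -H "Accept: application/json; version=10" \
  -H "Authorization: Token YOUR_TOKEN" \
  "http://localhost:8000/api/documents/"
```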
If an invalid version is specified, Paperless responds with
`406 Not Acceptable` and an error message in the body.
If a client wishes to verify whether it is compatible with any given
server, the following procedure should be performed:
1. Perform an _authenticated_ request against any API endpoint. The
   server will add two custom headers to the response:
```
X-Api-Version: 10
X-Version: <server-version>
```
2. Determine whether the client is compatible with this server based on
@@ -423,51 +384,56 @@ Initial API version.
#### Version 2
- Added field `Tag.color`. This read/write string field contains a hex
  color such as `#a6cee3`.
- Added read-only field `Tag.text_color`. This field contains the text
  color to use for a specific tag, which is either black or white
  depending on the brightness of `Tag.color`.
- Removed field `Tag.colour`.
#### Version 3
- Permissions endpoints have been added.
- The format of the `/api/ui_settings/` endpoint has changed.
#### Version 4
- Consumption templates were refactored to workflows and API endpoints
  changed as such.
#### Version 5
- Added bulk deletion methods for documents and objects.
#### Version 6
- Moved acknowledge tasks endpoint to be under `/api/tasks/acknowledge/`.
#### Version 7
- The format of select type custom fields has changed to return the options
  as an array of objects with `id` and `label` fields as opposed to a simple
  list of strings. When creating or updating a custom field value of a
  document for a select type custom field, the value should be the `id` of
  the option, whereas previously it was the index of the option.
#### Version 8
- The user field of document notes now returns a simplified user object
  rather than just the user ID.
#### Version 9
- The document `created` field is now a date, not a datetime. The
  `created_date` field is considered deprecated and will be removed in a
  future version.
#### Version 10
- The `show_on_dashboard` and `show_in_sidebar` fields of saved views have been
  removed. Relevant settings are now stored in the UISettings model. Compatibility is maintained
  for versions < 10 until support for API v9 is dropped.
- Document-editing operations such as `merge`, `rotate`, and `edit_pdf` have been
  moved from the bulk edit endpoint to their own individual endpoints. Using these methods via
  the bulk edit endpoint is still supported for compatibility with versions < 10 until support
  for API v9 is dropped.

File diff suppressed because it is too large


@@ -8,17 +8,17 @@ common [OCR](#ocr) related settings and some frontend settings. If set, these wi
preference over the settings via environment variables. If not set, the environment setting
or applicable default will be utilized instead.
- If you run paperless on docker, `paperless.conf` is not used.
  Rather, configure paperless by copying necessary options to
  `docker-compose.env`.
- If you are running paperless on anything else, paperless will search
  for the configuration file in these locations and use the first one
  it finds:
    - The environment variable `PAPERLESS_CONFIGURATION_PATH`
    - `/path/to/paperless/paperless.conf`
    - `/etc/paperless.conf`
    - `/usr/local/etc/paperless.conf`
## Required services


@@ -6,23 +6,23 @@ on Paperless-ngx.
Check out the source from GitHub. The repository is organized in the
following way:
- `main` always represents the latest release and will only see
  changes when a new release is made.
- `dev` contains the code that will be in the next release.
- `feature-X` contains bigger changes that will be in some release, but
  not necessarily the next one.
When making functional changes to Paperless-ngx, _always_ make your changes
on the `dev` branch.
Apart from that, the folder structure is as follows:
- `docs/` - Documentation.
- `src-ui/` - Code of the front end.
- `src/` - Code of the back end.
- `scripts/` - Various scripts that help with different parts of
  development.
- `docker/` - Files required to build the docker image.
## Contributing to Paperless-ngx
@@ -75,13 +75,13 @@ first-time setup.
4. Install the Python dependencies:
```bash
uv sync --group dev
```
5. Install pre-commit hooks:
```bash
uv run prek install
```
6. Apply migrations and create a superuser (also can be done via the web UI) for your development instance:
@@ -89,23 +89,22 @@ first-time setup.
```bash
# src/
uv run manage.py migrate
uv run manage.py createsuperuser
```
7. You can now either ...
   - install Redis or
   - use the included `scripts/start_services.sh` to use Docker to fire
     up a Redis instance (and some other services such as Tika,
     Gotenberg and a database server) or
   - spin up a bare Redis container

     ```bash
     docker run -d -p 6379:6379 --restart unless-stopped redis:latest
     ```
8. Continue with either back-end or front-end development or both :-).
@@ -118,18 +117,18 @@ work well for development, but you can use whatever you want.
Configure the IDE to use the `src/`-folder as the base source folder.
Configure the following launch configurations in your IDE:
- `uv run manage.py runserver`
- `uv run manage.py document_consumer`
- `uv run celery --app paperless worker -l DEBUG` (or any other log level)
To start them all:
```bash
# src/
uv run manage.py runserver & \
uv run manage.py document_consumer & \
uv run celery --app paperless worker -l DEBUG
```
You might need the front end to test your back end code.
@@ -140,17 +139,17 @@ To build the front end once use this command:
```bash
# src-ui/
pnpm install
pnpm ng build --configuration production
```
### Testing
- Run `pytest` in the `src/` directory to execute all tests. This also
  generates an HTML coverage report. When running tests, `paperless.conf`
  is loaded as well. However, the tests rely on the default
  configuration; this is not ideal, but for now, make sure no settings
  except for DEBUG are overridden when testing.
!!! note
@@ -199,7 +198,7 @@ The front end is built using AngularJS. In order to get started, you need Node.j
4. You can launch a development server by running:
```bash
pnpm ng serve
```
This will automatically update whenever you save. However, in-place
@@ -217,21 +216,21 @@ commit. See [above](#code-formatting-with-pre-commit-hooks) for installation ins
command such as
```bash
git ls-files -- '*.ts' | xargs uv run prek run prettier --files
```
Front end testing uses Jest and Playwright. Unit tests and e2e tests,
respectively, can be run non-interactively with:
```bash
pnpm ng test
pnpm playwright test
```
Playwright also includes a UI which can be run with:
```bash
pnpm playwright test --ui
```
### Building the frontend
@@ -239,7 +238,7 @@ $ npx playwright test --ui
In order to build the front end and serve it as part of Django, execute:
```bash
pnpm ng build --configuration production
```
This will build the front end and put it in a location from which the
@@ -254,14 +253,14 @@ these parts have to be translated separately.
### Front end localization
- The Angular front end does localization according to the [Angular
  documentation](https://angular.io/guide/i18n).
- The source language of the project is "en_US".
- The source strings end up in the file `src-ui/messages.xlf`.
- The translated strings need to be placed in the
  `src-ui/src/locale/` folder.
- In order to extract added or changed strings from the source files,
  call `ng extract-i18n`.
Adding new languages requires adding the translated files in the
`src-ui/src/locale/` folder and adjusting a couple files.
@@ -307,18 +306,18 @@ A majority of the strings that appear in the back end appear only when
the admin is used. However, some of these are still shown on the front
end (such as error messages).
- The Django application does localization according to the [Django
  documentation](https://docs.djangoproject.com/en/3.1/topics/i18n/translation/).
- The source language of the project is "en_US".
- Localization files end up in the folder `src/locale/`.
- In order to extract strings from the application, call
  `uv run manage.py makemessages -l en_US`. This is important after
  making changes to translatable strings.
- The message files need to be compiled for them to show up in the
  application. Call `uv run manage.py compilemessages` to do this.
  The generated files don't get committed into git, since these are
  derived artifacts. The build pipeline takes care of executing this
  command.
Adding new languages requires adding the translated files in the
`src/locale/`-folder and adjusting the file
@@ -381,10 +380,10 @@ base code.
Paperless-ngx uses parsers to add documents. A parser is
responsible for:
- Retrieving the content from the original
- Creating a thumbnail
- _optional:_ Retrieving a created date from the original
- _optional:_ Creating an archived document from the original
Custom parsers can be added to Paperless-ngx to support more file types. In
order to do that, you need to write the parser itself and announce its
@@ -442,17 +441,17 @@ def myparser_consumer_declaration(sender, **kwargs):
}
```
- `parser` is a reference to a class that extends `DocumentParser`.
- `weight` is used whenever two or more parsers are able to parse a
  file: The parser with the higher weight wins. This can be used to
  override the parsers provided by Paperless-ngx.
- `mime_types` is a dictionary. The keys are the mime types your
  parser supports and the value is the default file extension that
  Paperless-ngx should use when storing files and serving them for
  download. We could guess that from the file extensions, but some
  mime types have many extensions associated with them and the Python
  methods responsible for guessing the extension do not always return
  the same value.
## Using Visual Studio Code devcontainer
@@ -471,9 +470,8 @@ To get started:
2. VS Code will prompt you with "Reopen in container". Do so and wait for the environment to start.
3. In case your host operating system is Windows:
   - The Source Control view in Visual Studio Code might show: "The detected Git repository is potentially unsafe as the folder is owned by someone other than the current user." Use "Manage Unsafe Repositories" to fix this.
   - Git might have detected modifications for all files, because Windows uses CRLF line endings. Run `git checkout .` in the container's terminal to fix this issue.
4. Initialize the project by running the task **Project Setup: Run all Init Tasks**. This
will initialize the database tables and create a superuser. Then you can compile the front end
@@ -538,12 +536,12 @@ class MyDateParserPlugin(DateParserPluginBase):
Your parser instance is initialized with a `DateParserConfig` object accessible via `self.config`. This provides:
- `languages: list[str]` - List of language codes for date parsing
- `timezone_str: str` - Timezone string for date localization
- `ignore_dates: set[datetime.date]` - Dates that should be filtered out
- `reference_time: datetime.datetime` - Current time for filtering future dates
- `filename_date_order: str | None` - Date order preference for filenames (e.g., "DMY", "MDY")
- `content_date_order: str` - Date order preference for content
The base class provides two helper methods you can use:


@@ -44,28 +44,28 @@ system. On Linux, chances are high that this location is
You can always drag those files out of that folder to use them
elsewhere. Here are a couple notes about that.
- Paperless-ngx never modifies your original documents. It keeps
  checksums of all documents and uses a scheduled sanity checker to
  check that they remain the same.
- By default, paperless uses the internal ID of each document as its
  filename. This might not be very convenient for export. However, you
  can adjust the way files are stored in paperless by
  [configuring the filename format](advanced_usage.md#file-name-handling).
- [The exporter](administration.md#exporter) is
  another easy way to get your files out of paperless with reasonable
  file names.
## _What file types does paperless-ngx support?_
**A:** Currently, the following files are supported:
- PDF documents, PNG images, JPEG images, TIFF images, GIF images and
  WebP images are processed with OCR and converted into PDF documents.
- Plain text documents are supported as well and are added verbatim to
  paperless.
- With the optional Tika integration enabled (see [Tika configuration](https://docs.paperless-ngx.com/configuration#tika)),
  Paperless also supports various Office documents (.docx, .doc, .odt,
  .ppt, .pptx, .odp, .xls, .xlsx, .ods).
Paperless-ngx determines the type of a file by inspecting its content
rather than its file extension. However, files processed via the


@@ -28,36 +28,36 @@ physical documents into a searchable online archive so you can keep, well, _less
## Features
- **Organize and index** your scanned documents with tags, correspondents, types, and more.
- _Your_ data is stored locally on _your_ server and is never transmitted or shared in any way, unless you explicitly choose to do so.
- Performs **OCR** on your documents, adding searchable and selectable text, even to documents scanned with only images.
- Utilizes the open-source Tesseract engine to recognize more than 100 languages.
- _New!_ Supports remote OCR with Azure AI (opt-in).
- Documents are saved as PDF/A format which is designed for long term storage, alongside the unaltered originals.
- Uses machine-learning to automatically add tags, correspondents and document types to your documents.
- **New**: Paperless-ngx can now leverage AI (Large Language Models or LLMs) for document suggestions. This is an optional feature that can be enabled (and is disabled by default).
- Supports PDF documents, images, plain text files, Office documents (Word, Excel, PowerPoint, and LibreOffice equivalents)[^1] and more.
- Paperless stores your documents plain on disk. Filenames and folders are managed by paperless and their format can be configured freely with different configurations assigned to different documents.
- **Beautiful, modern web application** that features:
  - Customizable dashboard with statistics.
  - Filtering by tags, correspondents, types, and more.
  - Bulk editing of tags, correspondents, types and more.
  - Drag-and-drop uploading of documents throughout the app.
  - Customizable views can be saved and displayed on the dashboard and / or sidebar.
  - Support for custom fields of various data types.
  - Shareable public links with optional expiration.
- **Full text search** helps you find what you need:
  - Auto completion suggests relevant words from your documents.
  - Results are sorted by relevance to your search query.
  - Highlighting shows you which parts of the document matched the query.
  - Searching for similar documents ("More like this")
- **Email processing**[^1]: import documents from your email accounts:
  - Configure multiple accounts and rules for each account.
  - After processing, paperless can perform actions on the messages such as marking as read, deleting and more.
- A built-in robust **multi-user permissions** system that supports 'global' permissions as well as per document or object.
- A powerful workflow system that gives you even more control.
- **Optimized** for multi core systems: Paperless-ngx consumes multiple documents in parallel.
- The integrated sanity checker makes sure that your document archive is in good health.
[^1]: Office document and email consumption support is optional and provided by Apache Tika (see [configuration](https://docs.paperless-ngx.com/configuration/#tika))


@@ -42,12 +42,12 @@ The `CONSUMER_BARCODE_SCANNER` setting has been removed. zxing-cpp is now the on
### Action Required
- If you were already using `CONSUMER_BARCODE_SCANNER=ZXING`, simply remove the setting.
- If you had `CONSUMER_BARCODE_SCANNER=PYZBAR` or were using the default, no functional changes are needed beyond
  removing the setting. zxing-cpp supports all the same barcode formats and you should see improved detection
  reliability.
- The `libzbar0` / `libzbar-dev` system packages are no longer required and can be removed from any custom Docker
  images or host installations.
## Database Engine


@@ -44,8 +44,8 @@ account. In short, it automates the [Docker Compose setup](#docker) described be
#### Prerequisites
- Docker and Docker Compose must be [installed](https://docs.docker.com/engine/install/){:target="\_blank"}.
- macOS users will need [GNU sed](https://formulae.brew.sh/formula/gnu-sed) with support for running as `sed` as well as [wget](https://formulae.brew.sh/formula/wget).
#### Run the installation script
@@ -63,7 +63,7 @@ credentials you provided during the installation script.
#### Prerequisites
- Docker and Docker Compose must be [installed](https://docs.docker.com/engine/install/){:target="\_blank"}.
#### Installation
@@ -101,7 +101,7 @@ credentials you provided during the installation script.
```yaml
ports:
  - 8010:8000
```
3. Modify `docker-compose.env` with any configuration options you need.
@@ -145,11 +145,11 @@ a [superuser](usage.md#superusers) account.
If you want to run Paperless as a rootless container, make this
change in `docker-compose.yml`:
- Set the `user` running the container to map to the `paperless`
  user in the container. This value (`user_id` below) should be
  the same ID that `USERMAP_UID` and `USERMAP_GID` are set to in
  `docker-compose.env`. See `USERMAP_UID` and `USERMAP_GID`
  [here](configuration.md#docker).
Your entry for Paperless should contain something like:
@@ -171,26 +171,25 @@ to enable polling and disable inotify. See [here](configuration.md#polling).
#### Prerequisites
- Paperless runs on Linux only, Windows is not supported.
- Python 3.11, 3.12, 3.13, or 3.14 is required. As a policy, Paperless-ngx aims to support at least the three most recent Python versions and drops support for versions as they reach end-of-life. Newer versions may work, but some dependencies may not be fully compatible.
#### Installation
1. Install dependencies. Paperless requires the following packages:
   - `python3`
   - `python3-pip`
   - `python3-dev`
   - `default-libmysqlclient-dev` for MariaDB
   - `pkg-config` for mysqlclient (python dependency)
   - `fonts-liberation` for generating thumbnails for plain text
     files
   - `imagemagick` >= 6 for PDF conversion
   - `gnupg` for handling encrypted documents
   - `libpq-dev` for PostgreSQL
   - `libmagic-dev` for mime type detection
   - `mariadb-client` for MariaDB compile time
   - `poppler-utils` for barcode detection
Use this list for your preferred package management:
@@ -200,18 +199,17 @@ to enable polling and disable inotify. See [here](configuration.md#polling).
These dependencies are required for OCRmyPDF, which is used for text
recognition.
- `unpaper`
- `ghostscript`
- `icc-profiles-free`
- `qpdf`
- `liblept5`
- `libxml2`
- `pngquant` (suggested for certain PDF image optimizations)
- `zlib1g`
- `tesseract-ocr` >= 4.0.0 for OCR
- `tesseract-ocr` language packs (`tesseract-ocr-eng`,
  `tesseract-ocr-deu`, etc)
Use this list for your preferred package management:
@@ -220,16 +218,14 @@ to enable polling and disable inotify. See [here](configuration.md#polling).
```
On Raspberry Pi, these libraries are required as well:
- `libatlas-base-dev`
- `libxslt1-dev`
- `mime-support`
You will also need these for installing some of the python dependencies:
- `build-essential`
- `python3-setuptools`
- `python3-wheel`
Use this list for your preferred package management:
@@ -279,44 +275,41 @@ to enable polling and disable inotify. See [here](configuration.md#polling).
6. Configure Paperless-ngx. See [configuration](configuration.md) for details.
Edit the included `paperless.conf` and adjust the settings to your
needs. Required settings for getting Paperless-ngx running are:
- [`PAPERLESS_REDIS`](configuration.md#PAPERLESS_REDIS) should point to your Redis server, such as
`redis://localhost:6379`.
- [`PAPERLESS_DBENGINE`](configuration.md#PAPERLESS_DBENGINE) is optional, and should be one of `postgres`,
`mariadb`, or `sqlite`
- [`PAPERLESS_DBHOST`](configuration.md#PAPERLESS_DBHOST) should be the hostname on which your
PostgreSQL server is running. Do not configure this to use
SQLite instead. Also configure port, database name, user and
password as necessary.
- [`PAPERLESS_CONSUMPTION_DIR`](configuration.md#PAPERLESS_CONSUMPTION_DIR) should point to the folder
that Paperless-ngx should watch for incoming documents.
Likewise, [`PAPERLESS_DATA_DIR`](configuration.md#PAPERLESS_DATA_DIR) and
[`PAPERLESS_MEDIA_ROOT`](configuration.md#PAPERLESS_MEDIA_ROOT) define where Paperless-ngx stores its data.
If needed, these can point to the same directory.
- [`PAPERLESS_SECRET_KEY`](configuration.md#PAPERLESS_SECRET_KEY) should be a random sequence of
characters. It's used for authentication; if you don't set a
unique key, third parties can forge authentication credentials.
- Set [`PAPERLESS_URL`](configuration.md#PAPERLESS_URL) if you are behind a reverse proxy. This should
point to your domain. Please see
[configuration](configuration.md) for more
information.
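A minimal `paperless.conf` covering these settings might look like this (all values here are placeholders; adjust the hostnames, paths and secret key to your environment):

```
PAPERLESS_REDIS=redis://localhost:6379
PAPERLESS_DBENGINE=postgres
PAPERLESS_DBHOST=localhost
PAPERLESS_CONSUMPTION_DIR=/opt/paperless/consume
PAPERLESS_DATA_DIR=/opt/paperless/data
PAPERLESS_MEDIA_ROOT=/opt/paperless/media
PAPERLESS_SECRET_KEY=change-me-to-a-long-random-string
PAPERLESS_URL=https://paperless.example.com
```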
You can make many more adjustments, especially for OCR.
The following options are recommended for most users:
- Set [`PAPERLESS_OCR_LANGUAGE`](configuration.md#PAPERLESS_OCR_LANGUAGE) to the language most of your
documents are written in.
- Set [`PAPERLESS_TIME_ZONE`](configuration.md#PAPERLESS_TIME_ZONE) to your local time zone.
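For example (assuming mostly English and German documents in a Berlin time zone):

```
PAPERLESS_OCR_LANGUAGE=eng+deu
PAPERLESS_TIME_ZONE=Europe/Berlin
```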
!!! warning
Ensure your Redis instance [is secured](https://redis.io/docs/latest/operate/oss_and_stack/management/security/).
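If your Redis instance requires a password (for example via `requirepass`), it can be embedded in the connection URL. A sketch:

```
PAPERLESS_REDIS=redis://:yourredispassword@localhost:6379
```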
7. Create the following directories if they do not already exist:
- `/opt/paperless/media`
- `/opt/paperless/data`
- `/opt/paperless/consume`
Adjust these paths if you configured different folders.
Then verify that the `paperless` user has write permissions:
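A sketch of the commands involved, assuming the data lives under `/opt/paperless` and the service account is named `paperless`:

```shell-session
# create the folders and hand them to the service account
mkdir -p /opt/paperless/media /opt/paperless/data /opt/paperless/consume
chown paperless:paperless /opt/paperless/media /opt/paperless/data /opt/paperless/consume
# verify the paperless user can actually write there
sudo -u paperless touch /opt/paperless/media/write-test
sudo -u paperless rm /opt/paperless/media/write-test
```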
@@ -391,11 +384,10 @@ to enable polling and disable inotify. See [here](configuration.md#polling).
starting point.
Paperless needs:
- The `webserver` script to run the webserver.
- The `consumer` script to watch the input folder.
- The `taskqueue` script for background workers (document consumption, etc.).
- The `scheduler` script for periodic tasks such as email checking.
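Before wiring these into your init system, you can start each component manually to verify it works. A sketch, assuming a source checkout under `/opt/paperless/src` (the granian and `document_consumer` invocations mirror those shown later in this document; using `celery beat` for the scheduler is an assumption based on standard Celery):

```shell-session
cd /opt/paperless/src
granian --interface asginl --ws "paperless.asgi:application"  # webserver
python3 manage.py document_consumer                           # consumer
celery --app paperless worker                                 # taskqueue
celery --app paperless beat                                   # scheduler
```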
!!! note
@@ -501,19 +493,19 @@ your setup depending on how you installed Paperless.
This section describes how to update an existing Paperless Docker
installation. Keep these points in mind:
- Read the [changelog](changelog.md) and
take note of breaking changes.
- Decide whether to stay on SQLite or migrate to PostgreSQL.
Both work fine with Paperless-ngx.
However, if you already have a database server running
for other services, you might as well use it for Paperless too.
- The task scheduler of Paperless, which is used to execute periodic
tasks such as email checking and maintenance, requires a
[Redis](https://redis.io/) message broker instance. The
Docker Compose route takes care of that.
- The layout of the folder structure for your documents and data
remains the same, so you can plug your old Docker volumes into
paperless-ngx and expect it to find everything where it should be.
Migration to Paperless-ngx is then performed in a few simple steps:
@@ -598,7 +590,6 @@ commands as well.
1. Stop and remove the Paperless container.
2. If using an external database, stop that container.
3. Update Redis configuration.
1. If `REDIS_URL` is already set, change it to [`PAPERLESS_REDIS`](configuration.md#PAPERLESS_REDIS)
and continue to step 4.
@@ -610,22 +601,18 @@ commands as well.
the new Redis container.
4. Update user mapping.
1. If set, change the environment variable `PUID` to `USERMAP_UID`.
1. If set, change the environment variable `PGID` to `USERMAP_GID`.
5. Update configuration paths.
1. Set the environment variable [`PAPERLESS_DATA_DIR`](configuration.md#PAPERLESS_DATA_DIR) to `/config`.
6. Update media paths.
1. Set the environment variable [`PAPERLESS_MEDIA_ROOT`](configuration.md#PAPERLESS_MEDIA_ROOT) to
`/data/media`.
7. Update timezone.
1. Set the environment variable [`PAPERLESS_TIME_ZONE`](configuration.md#PAPERLESS_TIME_ZONE) to the same
value as `TZ`.
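Taken together, steps 3–7 amount to renaming a handful of environment variables. A sketch of the resulting environment (the concrete values are placeholders):

```
PAPERLESS_REDIS=redis://broker:6379   # was REDIS_URL
USERMAP_UID=1000                      # was PUID
USERMAP_GID=1000                      # was PGID
PAPERLESS_DATA_DIR=/config
PAPERLESS_MEDIA_ROOT=/data/media
PAPERLESS_TIME_ZONE=Europe/Berlin     # same value as the old TZ
```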
@@ -639,33 +626,33 @@ commands as well.
Paperless runs on Raspberry Pi. Some tasks can be slow on lower-powered
hardware, but a few settings can improve performance:
- Stick with SQLite to save some resources. See [troubleshooting](troubleshooting.md#log-reports-creating-paperlesstask-failed)
if you encounter issues with SQLite locking.
- If you do not need the filesystem-based consumer, consider disabling it
entirely by setting [`PAPERLESS_CONSUMER_DISABLE`](configuration.md#PAPERLESS_CONSUMER_DISABLE) to `true`.
- Consider setting [`PAPERLESS_OCR_PAGES`](configuration.md#PAPERLESS_OCR_PAGES) to 1, so that Paperless
OCRs only the first page of your documents. In most cases, this page
contains enough information to find the document later.
- [`PAPERLESS_TASK_WORKERS`](configuration.md#PAPERLESS_TASK_WORKERS) and [`PAPERLESS_THREADS_PER_WORKER`](configuration.md#PAPERLESS_THREADS_PER_WORKER) are
configured to use all cores. The Raspberry Pi models 3 and up have 4
cores, meaning that Paperless will use 2 workers and 2 threads per
worker. This may result in sluggish response times during
consumption, so you might want to lower these settings (example: 2
workers and 1 thread to always have some computing power left for
other tasks).
- Keep [`PAPERLESS_OCR_MODE`](configuration.md#PAPERLESS_OCR_MODE) at its default value `skip` and consider
OCRing your documents before feeding them into Paperless. Some
scanners are able to do this!
- Set [`PAPERLESS_OCR_SKIP_ARCHIVE_FILE`](configuration.md#PAPERLESS_OCR_SKIP_ARCHIVE_FILE) to `with_text` to skip archive
file generation for already OCRed documents, or `always` to skip it
for all documents.
- If you want to perform OCR on the device, consider using
`PAPERLESS_OCR_CLEAN=none`. This will speed up OCR times and use
less memory at the expense of slightly worse OCR results.
- If using Docker, consider setting [`PAPERLESS_WEBSERVER_WORKERS`](configuration.md#PAPERLESS_WEBSERVER_WORKERS) to 1. This will save some memory.
- Consider setting [`PAPERLESS_ENABLE_NLTK`](configuration.md#PAPERLESS_ENABLE_NLTK) to false, to disable the
more advanced language processing, which can take more memory and
processing time.
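Put together, a low-resource configuration along these lines is one possible starting point (the values are illustrative; tune them to your workload):

```
PAPERLESS_TASK_WORKERS=2
PAPERLESS_THREADS_PER_WORKER=1
PAPERLESS_OCR_PAGES=1
PAPERLESS_OCR_CLEAN=none
PAPERLESS_OCR_SKIP_ARCHIVE_FILE=with_text
PAPERLESS_WEBSERVER_WORKERS=1
PAPERLESS_ENABLE_NLTK=false
PAPERLESS_CONSUMER_DISABLE=true  # only if you don't use the consume folder
```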
For details, refer to [configuration](configuration.md).

View File

@@ -4,27 +4,27 @@
Check for the following issues:
- Ensure that the directory you're putting your documents in is the
folder paperless is watching. With Docker, this is configured
in the `docker-compose.yml` file. Without Docker, look at the
`CONSUMPTION_DIR` setting. Don't adjust this setting if you're
using Docker.
- Ensure that Redis is up and running. Paperless does its task
processing asynchronously, and for documents to arrive at the task
processor, it needs Redis to run. A quick connectivity check is
shown after this list.
- Ensure that the task processor is running. Docker does this
automatically. Manually invoke the task processor by executing
```shell-session
celery --app paperless worker
```
- Look at the output of paperless and inspect it for any errors.
- Go to the admin interface, and check if there are failed tasks. If
so, the tasks will contain an error message.
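A quick way to check Redis connectivity from the host running Paperless (assuming Redis listens on the default local port):

```shell-session
redis-cli -u redis://localhost:6379 ping
```

A healthy instance replies with `PONG`.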
## Consumer warns `OCR for XX failed`
@@ -78,12 +78,12 @@ Ensure that `chown` is possible on these directories.
This indicates that the Auto matching algorithm found no documents to
learn from. This may have two reasons:
- You don't use the Auto matching algorithm: The error can be safely
ignored in this case.
- You are using the Auto matching algorithm: The classifier explicitly
excludes documents with Inbox tags. Verify that there are documents
in your archive without inbox tags. The algorithm will only learn
from documents not in your inbox.
## UserWarning in sklearn on every single document
@@ -127,10 +127,10 @@ change in the `docker-compose.yml` file:
# The gotenberg chromium route is used to convert .eml files. We do not
# want to allow external content like tracking pixels or even javascript.
command:
- 'gotenberg'
- '--chromium-disable-javascript=true'
- '--chromium-allow-list=file:///tmp/.*'
- '--api-timeout=60s'
```
## Permission denied errors in the consumption directory

View File

@@ -14,42 +14,42 @@ for finding and managing your documents.
Paperless essentially consists of two different parts for managing your
documents:
- The _consumer_ watches a specified folder and adds all documents in
that folder to paperless.
- The _web server_ (web UI) provides a UI that you use to manage and
search documents.
Each document has data fields that you can assign to it:
- A _Document_ is a piece of paper that sometimes contains valuable
information.
- The _correspondent_ of a document is the person, institution or
company that a document either originates from, or is sent to.
- A _tag_ is a label that you can assign to documents. Think of labels
as more powerful folders: Multiple documents can be grouped together
with a single tag, however, a single document can also have multiple
tags. This is not possible with folders. The reason folders are not
implemented in paperless is simply that tags are much more versatile
than folders.
- A _document type_ is used to demarcate the type of a document such
as letter, bank statement, invoice, contract, etc. It is used to
identify what a document is about.
- The document _storage path_ is the location where the document files
are stored. See [Storage Paths](advanced_usage.md#storage-paths) for
more information.
- The _date added_ of a document is the date the document was scanned
into paperless. You cannot and should not change this date.
- The _date created_ of a document is the date the document was
initially issued. This can be the date you bought a product, the
date you signed a contract, or the date a letter was sent to you.
- The _archive serial number_ (short: ASN) of a document is the
identifier of the document in your physical document binders. See
[recommended workflow](#usage-recommended-workflow) below.
- The _content_ of a document is the text that was OCR'ed from the
document. This text is fed into the search engine and is used for
matching tags, correspondents and document types.
- Paperless-ngx also supports _custom fields_ which can be used to
store additional metadata about a document.
## The Web UI
@@ -93,12 +93,12 @@ download the document or share it via a share link.
Think of versions as **file history** for a document.
- Versions track the underlying file and extracted text content (OCR/text).
- Metadata such as tags, correspondent, document type, storage path and custom fields stay on the "root" document.
- Version files follow normal filename formatting (including storage paths) and add a `_vN` suffix (for example `_v1`, `_v2`).
- By default, search and document content use the latest version.
- In document detail, selecting a version switches the preview, file metadata, content and download buttons to that version.
- Deleting a non-root version keeps metadata and falls back to the latest remaining version.
### Management Lists
@@ -218,21 +218,20 @@ patterns can include wildcards and multiple patterns separated by a comma.
The actions all ensure that the same mail is not consumed twice by
different means. These are as follows:
- **Delete:** Immediately deletes mail that paperless has consumed
documents from. Use with caution.
- **Mark as read:** Mark consumed mail as read. Paperless will not
consume documents from already read mails. If you read a mail before
paperless sees it, it will be ignored.
- **Flag:** Sets the 'important' flag on mails with consumed
documents. Paperless will not consume flagged mails.
- **Move to folder:** Moves consumed mails out of the way so that
paperless won't consume them again.
- **Add custom Tag:** Adds a custom tag to mails with consumed
documents (the IMAP standard calls these "keywords"). Paperless
will not consume mails already tagged. Not all mail servers support
this feature!
- **Apple Mail support:** Apple Mail clients allow differently colored tags. For this to work use `apple:<color>` (e.g. _apple:green_) as a custom tag. Available colors are _red_, _orange_, _yellow_, _blue_, _green_, _violet_ and _grey_.
!!! warning
@@ -325,12 +324,12 @@ or using [email](#workflow-action-email) or [webhook](#workflow-action-webhook)
"Share links" are public links to files (or an archive of files) and can be created and managed under the 'Send' button on the document detail screen or from the bulk editor.
- Share links do not require a user to log in and thus link directly to a file or bundled download.
- Links are unique and are of the form `{paperless-url}/share/{randomly-generated-slug}`.
- Links can optionally have an expiration time set.
- After a link expires or is deleted users will be redirected to the regular paperless-ngx login.
- From the document detail screen you can create a share link for that single document.
- From the bulk editor you can create a **share link bundle** for any selection. Paperless-ngx prepares a ZIP archive in the background and exposes a single share link. You can revisit the "Manage share link bundles" dialog to monitor progress, retry failed bundles, or delete links.
!!! tip
@@ -514,25 +513,25 @@ flowchart TD
Workflows allow you to filter by:
- Source, e.g. documents uploaded via consume folder, API (& the web UI) and mail fetch
- File name, including wildcards, e.g. \*.pdf will match all PDFs.
- File path, including wildcards. Note that enabling `PAPERLESS_CONSUMER_RECURSIVE` would allow, for
example, automatically assigning documents to different owners based on the upload directory.
- Mail rule. Choosing this option will force 'mail fetch' to be the workflow source.
- Content matching (`Added`, `Updated` and `Scheduled` triggers only). Filter document content using the matching settings.
There are also 'advanced' filters available for `Added`, `Updated` and `Scheduled` triggers:
- Any Tags: Filter for documents with any of the specified tags.
- All Tags: Filter for documents with all of the specified tags.
- No Tags: Filter for documents with none of the specified tags.
- Document type: Filter documents with this document type.
- Not Document types: Filter documents without any of these document types.
- Correspondent: Filter documents with this correspondent.
- Not Correspondents: Filter documents without any of these correspondents.
- Storage path: Filter documents with this storage path.
- Not Storage paths: Filter documents without any of these storage paths.
- Custom field query: Filter documents with a custom field query (the same as used for the document list filters).
### Workflow Actions
@@ -544,37 +543,37 @@ The following workflow action types are available:
"Assignment" actions can assign:
- Title, see [workflow placeholders](usage.md#workflow-placeholders) below
- Tags, correspondent, document type and storage path
- Document owner
- View and / or edit permissions to users or groups
- Custom fields. Note that the field will be assigned without a value
##### Removal {#workflow-action-removal}
"Removal" actions can remove either all of or specific sets of the following:
- Tags, correspondents, document types or storage paths
- Document owner
- View and / or edit permissions
- Custom fields
##### Email {#workflow-action-email}
"Email" actions can send documents via email. This action requires a mail server to be [configured](configuration.md#email-sending). You can specify:
- The recipient email address(es) separated by commas
- The subject and body of the email, which can include placeholders, see [placeholders](usage.md#workflow-placeholders) below
- Whether to include the document as an attachment
##### Webhook {#workflow-action-webhook}
"Webhook" actions send a POST request to a specified URL. You can specify:
- The URL to send the request to
- The request body as text or as key-value pairs, which can include placeholders, see [placeholders](usage.md#workflow-placeholders) below.
- Encoding for the request body, either JSON or form data
- The request headers as key-value pairs
For security reasons, webhooks can be limited to specific ports and disallowed from connecting to local URLs. See the relevant
[configuration settings](configuration.md#workflow-webhooks) to change this behavior. If you are allowing non-admins to create workflows,
@@ -605,33 +604,33 @@ The available inputs differ depending on the type of workflow trigger.
This is because at the time of consumption (when the text is to be set), no automatic tags etc. have been
applied. You can use the following placeholders in the template with any trigger type:
- `{{correspondent}}`: assigned correspondent name
- `{{document_type}}`: assigned document type name
- `{{owner_username}}`: assigned owner username
- `{{added}}`: added datetime
- `{{added_year}}`: added year
- `{{added_year_short}}`: added year (two-digit)
- `{{added_month}}`: added month
- `{{added_month_name}}`: added month name
- `{{added_month_name_short}}`: added month short name
- `{{added_day}}`: added day
- `{{added_time}}`: added time in HH:MM format
- `{{original_filename}}`: original file name without extension
- `{{filename}}`: current file name without extension (for "added" workflows this may not be final yet; use `{{original_filename}}` if you need a stable value)
- `{{doc_title}}`: current document title (cannot be used in title assignment)
The following placeholders are only available for "added" or "updated" triggers:
- `{{created}}`: created datetime
- `{{created_year}}`: created year
- `{{created_year_short}}`: created year (two-digit)
- `{{created_month}}`: created month
- `{{created_month_name}}`: created month name
- `{{created_month_name_short}}`: created month short name
- `{{created_day}}`: created day
- `{{created_time}}`: created time in HH:MM format
- `{{doc_url}}`: URL to the document in the web UI. Requires the `PAPERLESS_URL` setting to be set.
- `{{doc_id}}`: Document ID
##### Examples
@@ -676,26 +675,26 @@ Multiple fields may be attached to a document but the same field name cannot be
The following custom field types are supported:
- `Text`: any text
- `Boolean`: true / false (check / unchecked) field
- `Date`: date
- `URL`: a valid URL
- `Integer`: integer number e.g. 12
- `Number`: float number e.g. 12.3456
- `Monetary`: [ISO 4217 currency code](https://en.wikipedia.org/wiki/ISO_4217#List_of_ISO_4217_currency_codes) and a number with exactly two decimals, e.g. USD12.30
- `Document Link`: reference(s) to other document(s) displayed as links, automatically creates a symmetrical link in reverse
- `Select`: a pre-defined list of strings from which the user can choose
## PDF Actions
Paperless-ngx supports basic editing operations for PDFs (these operations currently cannot be performed on non-PDF files). When viewing an individual document you can
open the 'PDF Editor' to use a simple UI for re-arranging, rotating, deleting pages and splitting documents.
- Merging documents: available when selecting multiple documents for 'bulk editing'.
- Rotating documents: available when selecting multiple documents for 'bulk editing' and via the pdf editor on an individual document's details page.
- Splitting documents: via the pdf editor on an individual document's details page.
- Deleting pages: via the pdf editor on an individual document's details page.
- Re-arranging pages: via the pdf editor on an individual document's details page.
!!! important
@@ -773,18 +772,18 @@ the system.
Here are a couple examples of tags and types that you could use in your
collection.
- An `inbox` tag for newly added documents that you haven't manually
edited yet.
- A tag `car` for everything car related (repairs, registration,
insurance, etc)
- A tag `todo` for documents that you still need to do something with,
such as reply, or perform some task online.
- A tag `bank account x` for all bank statements related to that
account.
- A tag `mail` for anything that you added to paperless via its mail
processing capabilities.
- A tag `missing_metadata` when you still need to add some metadata to
a document, but can't or don't want to do this right now.
## Searching {#basic-usage_searching}
@@ -873,8 +872,8 @@ The following diagram shows how easy it is to manage your documents.
### Preparations in paperless
- Create an inbox tag that gets assigned to all new documents.
- Create a TODO tag.
### Processing of the physical documents
@@ -948,15 +947,15 @@ Some documents require attention and require you to act on the document.
You may take two different approaches to handle these documents based on
how regularly you intend to scan documents and use paperless.
- If you scan and process your documents in paperless regularly,
assign a TODO tag to all scanned documents that you need to process.
Create a saved view on the dashboard that shows all documents with
this tag.
- If you do not scan documents regularly and use paperless solely for
archiving, create a physical todo box next to your physical inbox
and put documents you need to process in the TODO box. Once you have
performed the task associated with a document, move it from the TODO
box to the inbox.
## Remote OCR
@@ -977,64 +976,63 @@ or page limitations (e.g. with a free tier).
Paperless-ngx consists of the following components:
- **The webserver:** This serves the administration pages, the API,
and the new frontend. This is the main tool you'll be using to interact
with paperless. You may start the webserver directly with
```shell-session
cd /path/to/paperless/src/
granian --interface asginl --ws "paperless.asgi:application"
```
or by any other means such as Apache `mod_wsgi`.
- **The consumer:** This is what watches your consumption folder for
documents. However, the consumer itself does not really consume your
documents. Instead, it notifies a task processor that a new file is
ready for consumption. I suppose it should be named differently. This
was also used to check your emails, but that's now done elsewhere as
well.
Start the consumer with the management command `document_consumer`:
```shell-session
cd /path/to/paperless/src/
python3 manage.py document_consumer
```
- **The task processor:** Paperless relies on [Celery - Distributed
Task Queue](https://docs.celeryq.dev/en/stable/index.html) for doing
most of the heavy lifting. This is a task queue that accepts tasks
from multiple sources and processes these in parallel. It also comes
with a scheduler that executes certain commands periodically.
This task processor is responsible for:
- Consuming documents. When the consumer finds new documents, it
notifies the task processor to start a consumption task.
- The task processor also performs the consumption of any
documents you upload through the web interface.
- Consuming emails. It periodically checks your configured
accounts for new emails and notifies the task processor to
consume the attachment of an email.
- Maintaining the search index and the automatic matching
algorithm. These are things that paperless needs to do from time
to time in order to operate properly.
This allows paperless to process multiple documents from your
consumption folder in parallel! On a modern multi core system, this
makes the consumption process with full OCR blazingly fast.
The task processor comes with a built-in admin interface that you
can use to check whether any of the tasks failed and inspect the
errors (e.g., wrong email credentials, errors while consuming a
specific file).
- A [redis](https://redis.io/) message broker: This is a really
lightweight service that is responsible for getting the tasks from
the webserver and the consumer to the task scheduler. These run in a
different process (maybe even on different machines!), and
therefore, this is necessary.
- Optional: A database server. Paperless supports PostgreSQL, MariaDB
and SQLite for storing its data.

View File

@@ -42,13 +42,14 @@ dependencies = [
"djangorestframework~=3.16",
"djangorestframework-guardian~=0.4.0",
"drf-spectacular~=0.28",
"drf-spectacular-sidecar~=2026.1.1",
"drf-spectacular-sidecar~=2026.3.1",
"drf-writable-nested~=0.7.1",
"faiss-cpu>=1.10",
"filelock~=3.24.3",
"filelock~=3.25.2",
"flower~=2.0.1",
"gotenberg-client~=0.13.1",
"httpx-oauth~=0.16",
"ijson>=3.2",
"imap-tools~=1.11.0",
"jinja2~=3.1.5",
"langdetect~=1.0.9",
@@ -59,7 +60,7 @@ dependencies = [
"llama-index-llms-openai>=0.6.13",
"llama-index-vector-stores-faiss>=0.5.2",
"nltk~=3.9.1",
"ocrmypdf~=16.13.0",
"ocrmypdf~=17.3.0",
"openai>=1.76",
"pathvalidate~=3.3.1",
"pdf2image~=1.17.0",
@@ -71,7 +72,7 @@ dependencies = [
"rapidfuzz~=3.14.0",
"redis[hiredis]~=5.2.1",
"regex>=2025.9.18",
"scikit-learn~=1.7.0",
"scikit-learn~=1.8.0",
"sentence-transformers>=4.1",
"setproctitle~=1.3.4",
"tika-client~=0.10.0",
@@ -110,7 +111,7 @@ docs = [
testing = [
"daphne",
"factory-boy~=3.3.1",
"faker~=40.5.1",
"faker~=40.8.0",
"imagehash",
"pytest~=9.0.0",
"pytest-cov~=7.0.0",

View File

@@ -19,6 +19,4 @@ following additional information about it:
* Correspondent: ${DOCUMENT_CORRESPONDENT}
* Tags: ${DOCUMENT_TAGS}
It was consumed with the passphrase ${PASSPHRASE}
"

View File

@@ -1217,7 +1217,7 @@
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1756</context>
<context context-type="linenumber">1758</context>
</context-group>
</trans-unit>
<trans-unit id="1577733187050997705" datatype="html">
@@ -2090,7 +2090,7 @@
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">634</context>
<context context-type="linenumber">637</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-version-dropdown/document-version-dropdown.component.html</context>
@@ -2798,23 +2798,23 @@
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1376</context>
<context context-type="linenumber">1379</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1757</context>
<context context-type="linenumber">1759</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">802</context>
<context context-type="linenumber">833</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">835</context>
<context context-type="linenumber">871</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">854</context>
<context context-type="linenumber">894</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/manage/document-attributes/custom-fields/custom-fields.component.ts</context>
@@ -3400,31 +3400,31 @@
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1329</context>
<context context-type="linenumber">1332</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">445</context>
<context context-type="linenumber">470</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">485</context>
<context context-type="linenumber">510</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">523</context>
<context context-type="linenumber">548</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">561</context>
<context context-type="linenumber">586</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">623</context>
<context context-type="linenumber">648</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">756</context>
<context context-type="linenumber">781</context>
</context-group>
</trans-unit>
<trans-unit id="994016933065248559" datatype="html">
@@ -3434,39 +3434,46 @@
<context context-type="linenumber">9</context>
</context-group>
</trans-unit>
<trans-unit id="6705735915615634619" datatype="html">
<source>{VAR_PLURAL, plural, =1 {One page} other {<x id="INTERPOLATION"/> pages}}</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/common/confirm-dialog/merge-confirm-dialog/merge-confirm-dialog.component.html</context>
<context context-type="linenumber">25</context>
</context-group>
</trans-unit>
<trans-unit id="7508164375697837821" datatype="html">
<source>Use metadata from:</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/common/confirm-dialog/merge-confirm-dialog/merge-confirm-dialog.component.html</context>
<context context-type="linenumber">22</context>
<context context-type="linenumber">34</context>
</context-group>
</trans-unit>
<trans-unit id="2020403212524346652" datatype="html">
<source>Regenerate all metadata</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/common/confirm-dialog/merge-confirm-dialog/merge-confirm-dialog.component.html</context>
<context context-type="linenumber">24</context>
<context context-type="linenumber">36</context>
</context-group>
</trans-unit>
<trans-unit id="2710430925353472741" datatype="html">
<source>Try to include archive version in merge for non-PDF files</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/common/confirm-dialog/merge-confirm-dialog/merge-confirm-dialog.component.html</context>
<context context-type="linenumber">32</context>
<context context-type="linenumber">44</context>
</context-group>
</trans-unit>
<trans-unit id="5612366187076076264" datatype="html">
<source>Delete original documents after successful merge</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/common/confirm-dialog/merge-confirm-dialog/merge-confirm-dialog.component.html</context>
<context context-type="linenumber">36</context>
<context context-type="linenumber">48</context>
</context-group>
</trans-unit>
<trans-unit id="5138283234724909648" datatype="html">
<source>Note that only PDFs will be included.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/common/confirm-dialog/merge-confirm-dialog/merge-confirm-dialog.component.html</context>
<context context-type="linenumber">39</context>
<context context-type="linenumber">51</context>
</context-group>
</trans-unit>
<trans-unit id="1309641780471803652" datatype="html">
@@ -3505,7 +3512,7 @@
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1808</context>
<context context-type="linenumber">1812</context>
</context-group>
</trans-unit>
<trans-unit id="6661109599266152398" datatype="html">
@@ -3516,7 +3523,7 @@
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1809</context>
<context context-type="linenumber">1813</context>
</context-group>
</trans-unit>
<trans-unit id="5162686434580248853" datatype="html">
@@ -3527,7 +3534,7 @@
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1810</context>
<context context-type="linenumber">1814</context>
</context-group>
</trans-unit>
<trans-unit id="8157388568390631653" datatype="html">
@@ -5488,11 +5495,11 @@
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1333</context>
<context context-type="linenumber">1336</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">760</context>
<context context-type="linenumber">785</context>
</context-group>
</trans-unit>
<trans-unit id="4522609911791833187" datatype="html">
@@ -7320,7 +7327,7 @@
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">390</context>
<context context-type="linenumber">415</context>
</context-group>
<note priority="1" from="description">this string is used to separate processing, failed and added on the file upload widget</note>
</trans-unit>
@@ -7695,81 +7702,81 @@
<source>Error retrieving metadata</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">408</context>
<context context-type="linenumber">411</context>
</context-group>
</trans-unit>
<trans-unit id="2218903673684131427" datatype="html">
<source>An error occurred loading content: <x id="PH" equiv-text="err.message ?? err.toString()"/></source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">509,511</context>
<context context-type="linenumber">512,514</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">956,958</context>
<context context-type="linenumber">959,961</context>
</context-group>
</trans-unit>
<trans-unit id="6357361810318120957" datatype="html">
<source>Document was updated</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">629</context>
<context context-type="linenumber">632</context>
</context-group>
</trans-unit>
<trans-unit id="5154064822428631306" datatype="html">
<source>Document was updated at <x id="PH" equiv-text="formattedModified"/>.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">630</context>
<context context-type="linenumber">633</context>
</context-group>
</trans-unit>
<trans-unit id="8462497568316256794" datatype="html">
<source>Reload to discard your local unsaved edits and load the latest remote version.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">631</context>
<context context-type="linenumber">634</context>
</context-group>
</trans-unit>
<trans-unit id="7967484035994732534" datatype="html">
<source>Reload</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">633</context>
<context context-type="linenumber">636</context>
</context-group>
</trans-unit>
<trans-unit id="2907037627372942104" datatype="html">
<source>Document reloaded with latest changes.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">689</context>
<context context-type="linenumber">692</context>
</context-group>
</trans-unit>
<trans-unit id="6435639868943916539" datatype="html">
<source>Document reloaded.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">700</context>
<context context-type="linenumber">703</context>
</context-group>
</trans-unit>
<trans-unit id="6142395741265832184" datatype="html">
<source>Next document</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">802</context>
<context context-type="linenumber">805</context>
</context-group>
</trans-unit>
<trans-unit id="651985345816518480" datatype="html">
<source>Previous document</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">812</context>
<context context-type="linenumber">815</context>
</context-group>
</trans-unit>
<trans-unit id="2885986061416655600" datatype="html">
<source>Close document</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">820</context>
<context context-type="linenumber">823</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/services/open-documents.service.ts</context>
@@ -7780,191 +7787,191 @@
<source>Save document</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">827</context>
<context context-type="linenumber">830</context>
</context-group>
</trans-unit>
<trans-unit id="1784543155727940353" datatype="html">
<source>Save and close / next</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">836</context>
<context context-type="linenumber">839</context>
</context-group>
</trans-unit>
<trans-unit id="7427704425579737895" datatype="html">
<source>Error retrieving version content</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">940</context>
<context context-type="linenumber">943</context>
</context-group>
</trans-unit>
<trans-unit id="3456881259945295697" datatype="html">
<source>Error retrieving suggestions.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">997</context>
<context context-type="linenumber">1000</context>
</context-group>
</trans-unit>
<trans-unit id="2194092841814123758" datatype="html">
<source>Document &quot;<x id="PH" equiv-text="newValues.title"/>&quot; saved successfully.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1209</context>
<context context-type="linenumber">1212</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1236</context>
<context context-type="linenumber">1239</context>
</context-group>
</trans-unit>
<trans-unit id="6626387786259219838" datatype="html">
<source>Error saving document &quot;<x id="PH" equiv-text="this.document.title"/>&quot;</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1242</context>
<context context-type="linenumber">1245</context>
</context-group>
</trans-unit>
<trans-unit id="448882439049417053" datatype="html">
<source>Error saving document</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1297</context>
<context context-type="linenumber">1300</context>
</context-group>
</trans-unit>
<trans-unit id="8410796510716511826" datatype="html">
<source>Do you really want to move the document &quot;<x id="PH" equiv-text="this.document.title"/>&quot; to the trash?</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1330</context>
<context context-type="linenumber">1333</context>
</context-group>
</trans-unit>
<trans-unit id="282586936710748252" datatype="html">
<source>Documents can be restored prior to permanent deletion.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1331</context>
<context context-type="linenumber">1334</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">758</context>
<context context-type="linenumber">783</context>
</context-group>
</trans-unit>
<trans-unit id="7295637485862454066" datatype="html">
<source>Error deleting document</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1352</context>
<context context-type="linenumber">1355</context>
</context-group>
</trans-unit>
<trans-unit id="619486176823357521" datatype="html">
<source>Reprocess confirm</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1372</context>
<context context-type="linenumber">1375</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">798</context>
<context context-type="linenumber">829</context>
</context-group>
</trans-unit>
<trans-unit id="2951161989614003846" datatype="html">
<source>This operation will permanently recreate the archive file for this document.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1373</context>
<context context-type="linenumber">1376</context>
</context-group>
</trans-unit>
<trans-unit id="302054111564709516" datatype="html">
<source>The archive file will be re-generated with the current settings.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1374</context>
<context context-type="linenumber">1377</context>
</context-group>
</trans-unit>
<trans-unit id="4700389117298802932" datatype="html">
<source>Reprocess operation for &quot;<x id="PH" equiv-text="this.document.title"/>&quot; will begin in the background.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1384</context>
<context context-type="linenumber">1385</context>
</context-group>
</trans-unit>
<trans-unit id="4409560272830824468" datatype="html">
<source>Error executing operation</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1395</context>
<context context-type="linenumber">1396</context>
</context-group>
</trans-unit>
<trans-unit id="6030453331794586802" datatype="html">
<source>Error downloading document</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1458</context>
<context context-type="linenumber">1459</context>
</context-group>
</trans-unit>
<trans-unit id="4458954481601077369" datatype="html">
<source>Page Fit</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1538</context>
<context context-type="linenumber">1539</context>
</context-group>
</trans-unit>
<trans-unit id="4663705961777238777" datatype="html">
<source>PDF edit operation for &quot;<x id="PH" equiv-text="this.document.title"/>&quot; will begin in the background.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1775</context>
<context context-type="linenumber">1779</context>
</context-group>
</trans-unit>
<trans-unit id="9043972994040261999" datatype="html">
<source>Error executing PDF edit operation</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1787</context>
<context context-type="linenumber">1791</context>
</context-group>
</trans-unit>
<trans-unit id="6172690334763056188" datatype="html">
<source>Please enter the current password before attempting to remove it.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1798</context>
<context context-type="linenumber">1802</context>
</context-group>
</trans-unit>
<trans-unit id="968660764814228922" datatype="html">
<source>Password removal operation for &quot;<x id="PH" equiv-text="this.document.title"/>&quot; will begin in the background.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1830</context>
<context context-type="linenumber">1836</context>
</context-group>
</trans-unit>
<trans-unit id="2282118435712883014" datatype="html">
<source>Error executing password removal operation</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1844</context>
<context context-type="linenumber">1850</context>
</context-group>
</trans-unit>
<trans-unit id="3740891324955700797" datatype="html">
<source>Print failed.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1883</context>
<context context-type="linenumber">1889</context>
</context-group>
</trans-unit>
<trans-unit id="6457245677384603573" datatype="html">
<source>Error loading document for printing.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1895</context>
<context context-type="linenumber">1901</context>
</context-group>
</trans-unit>
<trans-unit id="6085793215710522488" datatype="html">
<source>An error occurred loading tiff: <x id="PH" equiv-text="err.toString()"/></source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1960</context>
<context context-type="linenumber">1966</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1964</context>
<context context-type="linenumber">1970</context>
</context-group>
</trans-unit>
<trans-unit id="4958946940233632319" datatype="html">
@@ -8208,25 +8215,25 @@
<source>Error executing bulk operation</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">294</context>
<context context-type="linenumber">321</context>
</context-group>
</trans-unit>
<trans-unit id="7894972847287473517" datatype="html">
<source>&quot;<x id="PH" equiv-text="items[0].name"/>&quot;</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">382</context>
<context context-type="linenumber">407</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">388</context>
<context context-type="linenumber">413</context>
</context-group>
</trans-unit>
<trans-unit id="8639884465898458690" datatype="html">
<source>&quot;<x id="PH" equiv-text="items[0].name"/>&quot; and &quot;<x id="PH_1" equiv-text="items[1].name"/>&quot;</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">384</context>
<context context-type="linenumber">409</context>
</context-group>
<note priority="1" from="description">This is for messages like &apos;modify &quot;tag1&quot; and &quot;tag2&quot;&apos;</note>
</trans-unit>
@@ -8234,7 +8241,7 @@
<source><x id="PH" equiv-text="list"/> and &quot;<x id="PH_1" equiv-text="items[items.length - 1].name"/>&quot;</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">392,394</context>
<context context-type="linenumber">417,419</context>
</context-group>
<note priority="1" from="description">this is for messages like &apos;modify &quot;tag1&quot;, &quot;tag2&quot; and &quot;tag3&quot;&apos;</note>
</trans-unit>
@@ -8242,14 +8249,14 @@
<source>Confirm tags assignment</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">409</context>
<context context-type="linenumber">434</context>
</context-group>
</trans-unit>
<trans-unit id="6619516195038467207" datatype="html">
<source>This operation will add the tag &quot;<x id="PH" equiv-text="tag.name"/>&quot; to <x id="PH_1" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">415</context>
<context context-type="linenumber">440</context>
</context-group>
</trans-unit>
<trans-unit id="1894412783609570695" datatype="html">
@@ -8258,14 +8265,14 @@
)"/> to <x id="PH_1" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">420,422</context>
<context context-type="linenumber">445,447</context>
</context-group>
</trans-unit>
<trans-unit id="7181166515756808573" datatype="html">
<source>This operation will remove the tag &quot;<x id="PH" equiv-text="tag.name"/>&quot; from <x id="PH_1" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">428</context>
<context context-type="linenumber">453</context>
</context-group>
</trans-unit>
<trans-unit id="3819792277998068944" datatype="html">
@@ -8274,7 +8281,7 @@
)"/> from <x id="PH_1" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">433,435</context>
<context context-type="linenumber">458,460</context>
</context-group>
</trans-unit>
<trans-unit id="2739066218579571288" datatype="html">
@@ -8285,84 +8292,84 @@
)"/> on <x id="PH_2" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">437,441</context>
<context context-type="linenumber">462,466</context>
</context-group>
</trans-unit>
<trans-unit id="2996713129519325161" datatype="html">
<source>Confirm correspondent assignment</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">478</context>
<context context-type="linenumber">503</context>
</context-group>
</trans-unit>
<trans-unit id="6900893559485781849" datatype="html">
<source>This operation will assign the correspondent &quot;<x id="PH" equiv-text="correspondent.name"/>&quot; to <x id="PH_1" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">480</context>
<context context-type="linenumber">505</context>
</context-group>
</trans-unit>
<trans-unit id="1257522660364398440" datatype="html">
<source>This operation will remove the correspondent from <x id="PH" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">482</context>
<context context-type="linenumber">507</context>
</context-group>
</trans-unit>
<trans-unit id="5393409374423140648" datatype="html">
<source>Confirm document type assignment</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">516</context>
<context context-type="linenumber">541</context>
</context-group>
</trans-unit>
<trans-unit id="332180123895325027" datatype="html">
<source>This operation will assign the document type &quot;<x id="PH" equiv-text="documentType.name"/>&quot; to <x id="PH_1" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">518</context>
<context context-type="linenumber">543</context>
</context-group>
</trans-unit>
<trans-unit id="2236642492594872779" datatype="html">
<source>This operation will remove the document type from <x id="PH" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">520</context>
<context context-type="linenumber">545</context>
</context-group>
</trans-unit>
<trans-unit id="6386555513013840736" datatype="html">
<source>Confirm storage path assignment</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">554</context>
<context context-type="linenumber">579</context>
</context-group>
</trans-unit>
<trans-unit id="8750527458618415924" datatype="html">
<source>This operation will assign the storage path &quot;<x id="PH" equiv-text="storagePath.name"/>&quot; to <x id="PH_1" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">556</context>
<context context-type="linenumber">581</context>
</context-group>
</trans-unit>
<trans-unit id="60728365335056946" datatype="html">
<source>This operation will remove the storage path from <x id="PH" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">558</context>
<context context-type="linenumber">583</context>
</context-group>
</trans-unit>
<trans-unit id="4187352575310415704" datatype="html">
<source>Confirm custom field assignment</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">587</context>
<context context-type="linenumber">612</context>
</context-group>
</trans-unit>
<trans-unit id="7966494636326273856" datatype="html">
<source>This operation will assign the custom field &quot;<x id="PH" equiv-text="customField.name"/>&quot; to <x id="PH_1" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">593</context>
<context context-type="linenumber">618</context>
</context-group>
</trans-unit>
<trans-unit id="5789455969634598553" datatype="html">
@@ -8371,14 +8378,14 @@
)"/> to <x id="PH_1" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">598,600</context>
<context context-type="linenumber">623,625</context>
</context-group>
</trans-unit>
<trans-unit id="5648572354333199245" datatype="html">
<source>This operation will remove the custom field &quot;<x id="PH" equiv-text="customField.name"/>&quot; from <x id="PH_1" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">606</context>
<context context-type="linenumber">631</context>
</context-group>
</trans-unit>
<trans-unit id="6666899594015948817" datatype="html">
@@ -8387,7 +8394,7 @@
)"/> from <x id="PH_1" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">611,613</context>
<context context-type="linenumber">636,638</context>
</context-group>
</trans-unit>
<trans-unit id="8050047262594964176" datatype="html">
@@ -8398,91 +8405,91 @@
)"/> on <x id="PH_2" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">615,619</context>
<context context-type="linenumber">640,644</context>
</context-group>
</trans-unit>
<trans-unit id="8615059324209654051" datatype="html">
<source>Move <x id="PH" equiv-text="this.list.selected.size"/> selected document(s) to the trash?</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">757</context>
<context context-type="linenumber">782</context>
</context-group>
</trans-unit>
<trans-unit id="8585195717323764335" datatype="html">
<source>This operation will permanently recreate the archive files for <x id="PH" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">799</context>
<context context-type="linenumber">830</context>
</context-group>
</trans-unit>
<trans-unit id="7366623494074776040" datatype="html">
<source>The archive files will be re-generated with the current settings.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">800</context>
<context context-type="linenumber">831</context>
</context-group>
</trans-unit>
<trans-unit id="6555329262222566158" datatype="html">
<source>Rotate confirm</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">832</context>
<context context-type="linenumber">868</context>
</context-group>
</trans-unit>
<trans-unit id="5203024009814367559" datatype="html">
<source>This operation will add rotated versions of the <x id="PH" equiv-text="this.list.selected.size"/> document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">833</context>
<context context-type="linenumber">869</context>
</context-group>
</trans-unit>
<trans-unit id="7910756456450124185" datatype="html">
<source>Merge confirm</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">852</context>
<context context-type="linenumber">892</context>
</context-group>
</trans-unit>
<trans-unit id="7643543647233874431" datatype="html">
<source>This operation will merge <x id="PH" equiv-text="this.list.selected.size"/> selected documents into a new document.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">853</context>
<context context-type="linenumber">893</context>
</context-group>
</trans-unit>
<trans-unit id="7869008840945899895" datatype="html">
<source>Merged document will be queued for consumption.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">872</context>
<context context-type="linenumber">916</context>
</context-group>
</trans-unit>
<trans-unit id="476913782630693351" datatype="html">
<source>Custom fields updated.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">896</context>
<context context-type="linenumber">940</context>
</context-group>
</trans-unit>
<trans-unit id="3873496751167944011" datatype="html">
<source>Error updating custom fields.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">905</context>
<context context-type="linenumber">949</context>
</context-group>
</trans-unit>
<trans-unit id="6144801143088984138" datatype="html">
<source>Share link bundle creation requested.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">945</context>
<context context-type="linenumber">989</context>
</context-group>
</trans-unit>
<trans-unit id="46019676931295023" datatype="html">
<source>Share link bundle creation is not available yet.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">952</context>
<context context-type="linenumber">996</context>
</context-group>
</trans-unit>
<trans-unit id="6307402210351946694" datatype="html">

View File

@@ -10,10 +10,22 @@
<ul class="list-group"
cdkDropList
(cdkDropListDropped)="onDrop($event)">
@for (documentID of documentIDs; track documentID) {
<li class="list-group-item" cdkDrag>
@for (document of documents; track document.id) {
<li class="list-group-item d-flex align-items-center" cdkDrag>
<i-bs name="grip-vertical" class="me-2"></i-bs>
{{getDocument(documentID)?.title}}
<div class="d-flex flex-column">
<div>
@if (document.correspondent) {
<b>{{document.correspondent | correspondentName | async}}: </b>
}{{document.title}}
</div>
<small class="text-muted">
{{document.created | customDate:'mediumDate'}}
@if (document.page_count) {
| {document.page_count, plural, =1 {One page} other {{{document.page_count}} pages}}
}
</small>
</div>
</li>
}
</ul>

View File

@@ -3,11 +3,14 @@ import {
DragDropModule,
moveItemInArray,
} from '@angular/cdk/drag-drop'
import { AsyncPipe } from '@angular/common'
import { Component, OnInit, inject } from '@angular/core'
import { FormsModule, ReactiveFormsModule } from '@angular/forms'
import { NgxBootstrapIconsModule } from 'ngx-bootstrap-icons'
import { takeUntil } from 'rxjs'
import { Document } from 'src/app/data/document'
import { CorrespondentNamePipe } from 'src/app/pipes/correspondent-name.pipe'
import { CustomDatePipe } from 'src/app/pipes/custom-date.pipe'
import { PermissionsService } from 'src/app/services/permissions.service'
import { DocumentService } from 'src/app/services/rest/document.service'
import { ConfirmDialogComponent } from '../confirm-dialog.component'
@@ -17,6 +20,9 @@ import { ConfirmDialogComponent } from '../confirm-dialog.component'
templateUrl: './merge-confirm-dialog.component.html',
styleUrl: './merge-confirm-dialog.component.scss',
imports: [
AsyncPipe,
CorrespondentNamePipe,
CustomDatePipe,
DragDropModule,
FormsModule,
ReactiveFormsModule,
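A note on the template change above: the dialog now renders full Document objects (title, correspondent, created date, page count) instead of bare IDs, and this hunk only shows the imports that support the new pipes. As a rough sketch of how the component side could populate `documents` (the `getCachedMany` call is an assumption; the actual component may resolve them differently):

import { OnInit, inject } from '@angular/core'
import { Document } from 'src/app/data/document'
import { DocumentService } from 'src/app/services/rest/document.service'

// Hedged sketch only: resolve the dialog's documentIDs into Document objects
// so the template can bind title, correspondent, created and page_count.
export class MergeConfirmDialogSketch implements OnInit {
  documentIDs: number[] = []
  documents: Document[] = []
  private documentService = inject(DocumentService)

  ngOnInit(): void {
    // getCachedMany is assumed to exist on DocumentService; if it does not,
    // any id-to-Document lookup would serve the same purpose here.
    this.documentService
      .getCachedMany(this.documentIDs)
      .subscribe((docs: Document[]) => (this.documents = docs))
  }
}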

View File

@@ -31,8 +31,8 @@ export enum EditDialogMode {
@Directive()
export abstract class EditDialogComponent<
T extends ObjectWithPermissions | ObjectWithId,
>
T extends ObjectWithPermissions | ObjectWithId,
>
extends LoadingComponentWithPermissions
implements OnInit
{

View File

@@ -3,6 +3,7 @@ import { provideHttpClientTesting } from '@angular/common/http/testing'
import { ComponentFixture, TestBed } from '@angular/core/testing'
import { NgbActiveModal } from '@ng-bootstrap/ng-bootstrap'
import { NgxBootstrapIconsModule, allIcons } from 'ngx-bootstrap-icons'
import { DocumentService } from 'src/app/services/rest/document.service'
import { PDFEditorComponent } from './pdf-editor.component'
describe('PDFEditorComponent', () => {
@@ -139,4 +140,16 @@ describe('PDFEditorComponent', () => {
expect(component.pages[1].page).toBe(2)
expect(component.pages[2].page).toBe(3)
})
it('should include selected version in preview source when provided', () => {
const documentService = TestBed.inject(DocumentService)
const previewSpy = jest
.spyOn(documentService, 'getPreviewUrl')
.mockReturnValue('preview-version')
component.documentID = 3
component.versionID = 10
expect(component.pdfSrc).toBe('preview-version')
expect(previewSpy).toHaveBeenCalledWith(3, false, 10)
})
})
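The new spec pins down a three-argument preview call, getPreviewUrl(3, false, 10). For orientation, the signature this implies is roughly the following, sketched under the assumption that the version is passed as a query parameter; the real document.service.ts implementation may differ:

// Assumed shape of the extended helper, inferred from the call sites in this
// spec and in the component diff below. The parameter name 'version' is a guess.
class PreviewUrlSketch {
  constructor(private apiBase: string) {}

  getPreviewUrl(id: number, original = false, versionID?: number): string {
    const url = new URL(`${this.apiBase}documents/${id}/preview/`)
    if (original) {
      url.searchParams.set('original', 'true')
    }
    if (versionID !== undefined) {
      url.searchParams.set('version', versionID.toString())
    }
    return url.toString()
  }
}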

View File

@@ -46,6 +46,7 @@ export class PDFEditorComponent extends ConfirmDialogComponent {
activeModal: NgbActiveModal = inject(NgbActiveModal)
documentID: number
versionID?: number
pages: PageOperation[] = []
totalPages = 0
editMode: PdfEditorEditMode = this.settingsService.get(
@@ -55,7 +56,11 @@ export class PDFEditorComponent extends ConfirmDialogComponent {
includeMetadata: boolean = true
get pdfSrc(): string {
return this.documentService.getPreviewUrl(this.documentID)
return this.documentService.getPreviewUrl(
this.documentID,
false,
this.versionID
)
}
pdfLoaded(pdf: PngxPdfDocumentProxy) {

View File

@@ -950,8 +950,8 @@ describe('DocumentDetailComponent', () => {
it('should support reprocess, confirm and close modal after started', () => {
initNormally()
const bulkEditSpy = jest.spyOn(documentService, 'bulkEdit')
bulkEditSpy.mockReturnValue(of(true))
const reprocessSpy = jest.spyOn(documentService, 'reprocessDocuments')
reprocessSpy.mockReturnValue(of(true))
let openModal: NgbModalRef
modalService.activeInstances.subscribe((modal) => (openModal = modal[0]))
const modalSpy = jest.spyOn(modalService, 'open')
@@ -959,7 +959,7 @@ describe('DocumentDetailComponent', () => {
component.reprocess()
const modalCloseSpy = jest.spyOn(openModal, 'close')
openModal.componentInstance.confirmClicked.next()
expect(bulkEditSpy).toHaveBeenCalledWith([doc.id], 'reprocess', {})
expect(reprocessSpy).toHaveBeenCalledWith([doc.id])
expect(modalSpy).toHaveBeenCalled()
expect(toastSpy).toHaveBeenCalled()
expect(modalCloseSpy).toHaveBeenCalled()
@@ -967,13 +967,13 @@ describe('DocumentDetailComponent', () => {
it('should show error if redo ocr call fails', () => {
initNormally()
const bulkEditSpy = jest.spyOn(documentService, 'bulkEdit')
const reprocessSpy = jest.spyOn(documentService, 'reprocessDocuments')
let openModal: NgbModalRef
modalService.activeInstances.subscribe((modal) => (openModal = modal[0]))
const toastSpy = jest.spyOn(toastService, 'showError')
component.reprocess()
const modalCloseSpy = jest.spyOn(openModal, 'close')
bulkEditSpy.mockReturnValue(throwError(() => new Error('error occurred')))
reprocessSpy.mockReturnValue(throwError(() => new Error('error occurred')))
openModal.componentInstance.confirmClicked.next()
expect(toastSpy).toHaveBeenCalled()
expect(modalCloseSpy).not.toHaveBeenCalled()
@@ -1644,9 +1644,9 @@ describe('DocumentDetailComponent', () => {
expect(
fixture.debugElement.query(By.css('.preview-sticky img'))
).not.toBeUndefined()
;(component.document.mime_type =
;((component.document.mime_type =
'application/vnd.openxmlformats-officedocument.wordprocessingml.document'),
fixture.detectChanges()
fixture.detectChanges())
expect(component.archiveContentRenderType).toEqual(
component.ContentRenderType.Other
)
@@ -1661,23 +1661,23 @@ describe('DocumentDetailComponent', () => {
const closeSpy = jest.spyOn(openDocumentsService, 'closeDocument')
const errorSpy = jest.spyOn(toastService, 'showError')
initNormally()
component.selectedVersionId = 10
component.editPdf()
expect(modal).not.toBeUndefined()
modal.componentInstance.documentID = doc.id
expect(modal.componentInstance.versionID).toBe(10)
modal.componentInstance.pages = [{ page: 1, rotate: 0, splitAfter: false }]
modal.componentInstance.confirm()
let req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/edit_pdf/`
)
expect(req.request.body).toEqual({
documents: [doc.id],
method: 'edit_pdf',
parameters: {
operations: [{ page: 1, rotate: 0, doc: 0 }],
delete_original: false,
update_document: false,
include_metadata: true,
},
documents: [10],
operations: [{ page: 1, rotate: 0, doc: 0 }],
delete_original: false,
update_document: false,
include_metadata: true,
source_mode: 'explicit_selection',
})
req.error(new ErrorEvent('failed'))
expect(errorSpy).toHaveBeenCalled()
@@ -1688,7 +1688,7 @@ describe('DocumentDetailComponent', () => {
modal.componentInstance.deleteOriginal = true
modal.componentInstance.confirm()
req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/edit_pdf/`
)
req.flush(true)
expect(closeSpy).toHaveBeenCalled()
@@ -1698,6 +1698,7 @@ describe('DocumentDetailComponent', () => {
let modal: NgbModalRef
modalService.activeInstances.subscribe((m) => (modal = m[0]))
initNormally()
component.selectedVersionId = 10
component.password = 'secret'
component.removePassword()
const dialog =
@@ -1707,17 +1708,15 @@ describe('DocumentDetailComponent', () => {
dialog.deleteOriginal = true
dialog.confirm()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/remove_password/`
)
expect(req.request.body).toEqual({
documents: [doc.id],
method: 'remove_password',
parameters: {
password: 'secret',
update_document: false,
include_metadata: false,
delete_original: true,
},
documents: [10],
password: 'secret',
update_document: false,
include_metadata: false,
delete_original: true,
source_mode: 'explicit_selection',
})
req.flush(true)
})
@@ -1732,7 +1731,7 @@ describe('DocumentDetailComponent', () => {
expect(errorSpy).toHaveBeenCalled()
httpTestingController.expectNone(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/remove_password/`
)
})
@@ -1748,7 +1747,7 @@ describe('DocumentDetailComponent', () => {
modal.componentInstance as PasswordRemovalConfirmDialogComponent
dialog.confirm()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/remove_password/`
)
req.error(new ErrorEvent('failed'))
@@ -1769,7 +1768,7 @@ describe('DocumentDetailComponent', () => {
modal.componentInstance as PasswordRemovalConfirmDialogComponent
dialog.confirm()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/remove_password/`
)
req.flush(true)

View File

@@ -74,7 +74,10 @@ import {
import { CorrespondentService } from 'src/app/services/rest/correspondent.service'
import { CustomFieldsService } from 'src/app/services/rest/custom-fields.service'
import { DocumentTypeService } from 'src/app/services/rest/document-type.service'
import { DocumentService } from 'src/app/services/rest/document.service'
import {
BulkEditSourceMode,
DocumentService,
} from 'src/app/services/rest/document.service'
import { SavedViewService } from 'src/app/services/rest/saved-view.service'
import { StoragePathService } from 'src/app/services/rest/storage-path.service'
import { TagService } from 'src/app/services/rest/tag.service'
@@ -1376,27 +1379,25 @@ export class DocumentDetailComponent
modal.componentInstance.btnCaption = $localize`Proceed`
modal.componentInstance.confirmClicked.subscribe(() => {
modal.componentInstance.buttonsEnabled = false
this.documentsService
.bulkEdit([this.document.id], 'reprocess', {})
.subscribe({
next: () => {
this.toastService.showInfo(
$localize`Reprocess operation for "${this.document.title}" will begin in the background.`
)
if (modal) {
modal.close()
}
},
error: (error) => {
if (modal) {
modal.componentInstance.buttonsEnabled = true
}
this.toastService.showError(
$localize`Error executing operation`,
error
)
},
})
this.documentsService.reprocessDocuments([this.document.id]).subscribe({
next: () => {
this.toastService.showInfo(
$localize`Reprocess operation for "${this.document.title}" will begin in the background.`
)
if (modal) {
modal.close()
}
},
error: (error) => {
if (modal) {
modal.componentInstance.buttonsEnabled = true
}
this.toastService.showError(
$localize`Error executing operation`,
error
)
},
})
})
}
@@ -1753,20 +1754,23 @@ export class DocumentDetailComponent
size: 'xl',
scrollable: true,
})
const sourceDocumentId = this.selectedVersionId ?? this.document.id
modal.componentInstance.title = $localize`PDF Editor`
modal.componentInstance.btnCaption = $localize`Proceed`
modal.componentInstance.documentID = this.document.id
modal.componentInstance.versionID = sourceDocumentId
modal.componentInstance.confirmClicked
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
modal.componentInstance.buttonsEnabled = false
this.documentsService
.bulkEdit([this.document.id], 'edit_pdf', {
.editPdfDocuments([sourceDocumentId], {
operations: modal.componentInstance.getOperations(),
delete_original: modal.componentInstance.deleteOriginal,
update_document:
modal.componentInstance.editMode == PdfEditorEditMode.Update,
include_metadata: modal.componentInstance.includeMetadata,
source_mode: BulkEditSourceMode.EXPLICIT_SELECTION,
})
.pipe(first(), takeUntil(this.unsubscribeNotifier))
.subscribe({
@@ -1812,16 +1816,18 @@ export class DocumentDetailComponent
modal.componentInstance.confirmClicked
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
const sourceDocumentId = this.selectedVersionId ?? this.document.id
const dialog =
modal.componentInstance as PasswordRemovalConfirmDialogComponent
dialog.buttonsEnabled = false
this.networkActive = true
this.documentsService
.bulkEdit([this.document.id], 'remove_password', {
.removePasswordDocuments([sourceDocumentId], {
password: this.password,
update_document: dialog.updateDocument,
include_metadata: dialog.includeMetadata,
delete_original: dialog.deleteOriginal,
source_mode: BulkEditSourceMode.EXPLICIT_SELECTION,
})
.pipe(first(), takeUntil(this.unsubscribeNotifier))
.subscribe({

View File

@@ -1,3 +1,4 @@
import { DatePipe } from '@angular/common'
import { provideHttpClient, withInterceptorsFromDi } from '@angular/common/http'
import {
HttpTestingController,
@@ -138,6 +139,7 @@ describe('BulkEditorComponent', () => {
},
},
FilterPipe,
DatePipe,
SettingsService,
{
provide: UserService,
@@ -849,13 +851,11 @@ describe('BulkEditorComponent', () => {
expect(modal).not.toBeUndefined()
modal.componentInstance.confirm()
let req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/delete/`
)
req.flush(true)
expect(req.request.body).toEqual({
documents: [3, 4],
method: 'delete',
parameters: {},
})
httpTestingController.match(
`${environment.apiBaseUrl}documents/?page=1&page_size=50&ordering=-created&truncate_content=true`
@@ -868,7 +868,7 @@ describe('BulkEditorComponent', () => {
fixture.detectChanges()
component.applyDelete()
req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/delete/`
)
})
@@ -944,13 +944,11 @@ describe('BulkEditorComponent', () => {
expect(modal).not.toBeUndefined()
modal.componentInstance.confirm()
let req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/reprocess/`
)
req.flush(true)
expect(req.request.body).toEqual({
documents: [3, 4],
method: 'reprocess',
parameters: {},
})
httpTestingController.match(
`${environment.apiBaseUrl}documents/?page=1&page_size=50&ordering=-created&truncate_content=true`
@@ -979,13 +977,13 @@ describe('BulkEditorComponent', () => {
modal.componentInstance.rotate()
modal.componentInstance.confirm()
let req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/rotate/`
)
req.flush(true)
expect(req.request.body).toEqual({
documents: [3, 4],
method: 'rotate',
parameters: { degrees: 90 },
degrees: 90,
source_mode: 'latest_version',
})
httpTestingController.match(
`${environment.apiBaseUrl}documents/?page=1&page_size=50&ordering=-created&truncate_content=true`
@@ -1021,13 +1019,12 @@ describe('BulkEditorComponent', () => {
modal.componentInstance.metadataDocumentID = 3
modal.componentInstance.confirm()
let req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/merge/`
)
req.flush(true)
expect(req.request.body).toEqual({
documents: [3, 4],
method: 'merge',
parameters: { metadata_document_id: 3 },
metadata_document_id: 3,
})
httpTestingController.match(
`${environment.apiBaseUrl}documents/?page=1&page_size=50&ordering=-created&truncate_content=true`
@@ -1040,13 +1037,13 @@ describe('BulkEditorComponent', () => {
modal.componentInstance.deleteOriginals = true
modal.componentInstance.confirm()
req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/merge/`
)
req.flush(true)
expect(req.request.body).toEqual({
documents: [3, 4],
method: 'merge',
parameters: { metadata_document_id: 3, delete_originals: true },
metadata_document_id: 3,
delete_originals: true,
})
httpTestingController.match(
`${environment.apiBaseUrl}documents/?page=1&page_size=50&ordering=-created&truncate_content=true`
@@ -1061,13 +1058,13 @@ describe('BulkEditorComponent', () => {
modal.componentInstance.archiveFallback = true
modal.componentInstance.confirm()
req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/merge/`
)
req.flush(true)
expect(req.request.body).toEqual({
documents: [3, 4],
method: 'merge',
parameters: { metadata_document_id: 3, archive_fallback: true },
metadata_document_id: 3,
archive_fallback: true,
})
httpTestingController.match(
`${environment.apiBaseUrl}documents/?page=1&page_size=50&ordering=-created&truncate_content=true`

View File

@@ -12,7 +12,7 @@ import {
} from '@ng-bootstrap/ng-bootstrap'
import { saveAs } from 'file-saver'
import { NgxBootstrapIconsModule } from 'ngx-bootstrap-icons'
import { first, map, Subject, switchMap, takeUntil } from 'rxjs'
import { first, map, Observable, Subject, switchMap, takeUntil } from 'rxjs'
import { ConfirmDialogComponent } from 'src/app/components/common/confirm-dialog/confirm-dialog.component'
import { CustomField } from 'src/app/data/custom-field'
import { MatchingModel } from 'src/app/data/matching-model'
@@ -29,7 +29,9 @@ import { CorrespondentService } from 'src/app/services/rest/correspondent.servic
import { CustomFieldsService } from 'src/app/services/rest/custom-fields.service'
import { DocumentTypeService } from 'src/app/services/rest/document-type.service'
import {
DocumentBulkEditMethod,
DocumentService,
MergeDocumentsRequest,
SelectionDataItem,
} from 'src/app/services/rest/document.service'
import { SavedViewService } from 'src/app/services/rest/saved-view.service'
@@ -255,9 +257,9 @@ export class BulkEditorComponent
this.unsubscribeNotifier.complete()
}
private executeBulkOperation(
private executeBulkEditMethod(
modal: NgbModalRef,
method: string,
method: DocumentBulkEditMethod,
args: any,
overrideDocumentIDs?: number[]
) {
@@ -272,32 +274,55 @@ export class BulkEditorComponent
)
.pipe(first())
.subscribe({
next: () => {
if (args['delete_originals']) {
this.list.selected.clear()
}
this.list.reload()
this.list.reduceSelectionToFilter()
this.list.selected.forEach((id) => {
this.openDocumentService.refreshDocument(id)
})
this.savedViewService.maybeRefreshDocumentCounts()
if (modal) {
modal.close()
}
},
error: (error) => {
if (modal) {
modal.componentInstance.buttonsEnabled = true
}
this.toastService.showError(
$localize`Error executing bulk operation`,
error
)
},
next: () => this.handleOperationSuccess(modal),
error: (error) => this.handleOperationError(modal, error),
})
}
private executeDocumentAction(
modal: NgbModalRef,
request: Observable<any>,
options: { deleteOriginals?: boolean } = {}
) {
if (modal) {
modal.componentInstance.buttonsEnabled = false
}
request.pipe(first()).subscribe({
next: () => {
this.handleOperationSuccess(modal, options.deleteOriginals ?? false)
},
error: (error) => this.handleOperationError(modal, error),
})
}
private handleOperationSuccess(
modal: NgbModalRef,
clearSelection: boolean = false
) {
if (clearSelection) {
this.list.selected.clear()
}
this.list.reload()
this.list.reduceSelectionToFilter()
this.list.selected.forEach((id) => {
this.openDocumentService.refreshDocument(id)
})
this.savedViewService.maybeRefreshDocumentCounts()
if (modal) {
modal.close()
}
}
private handleOperationError(modal: NgbModalRef, error: any) {
if (modal) {
modal.componentInstance.buttonsEnabled = true
}
this.toastService.showError(
$localize`Error executing bulk operation`,
error
)
}
private applySelectionData(
items: SelectionDataItem[],
selectionModel: FilterableDropdownSelectionModel
@@ -446,13 +471,13 @@ export class BulkEditorComponent
modal.componentInstance.confirmClicked
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
this.executeBulkOperation(modal, 'modify_tags', {
this.executeBulkEditMethod(modal, 'modify_tags', {
add_tags: changedTags.itemsToAdd.map((t) => t.id),
remove_tags: changedTags.itemsToRemove.map((t) => t.id),
})
})
} else {
this.executeBulkOperation(null, 'modify_tags', {
this.executeBulkEditMethod(null, 'modify_tags', {
add_tags: changedTags.itemsToAdd.map((t) => t.id),
remove_tags: changedTags.itemsToRemove.map((t) => t.id),
})
@@ -486,12 +511,12 @@ export class BulkEditorComponent
modal.componentInstance.confirmClicked
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
this.executeBulkOperation(modal, 'set_correspondent', {
this.executeBulkEditMethod(modal, 'set_correspondent', {
correspondent: correspondent ? correspondent.id : null,
})
})
} else {
this.executeBulkOperation(null, 'set_correspondent', {
this.executeBulkEditMethod(null, 'set_correspondent', {
correspondent: correspondent ? correspondent.id : null,
})
}
@@ -524,12 +549,12 @@ export class BulkEditorComponent
modal.componentInstance.confirmClicked
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
this.executeBulkOperation(modal, 'set_document_type', {
this.executeBulkEditMethod(modal, 'set_document_type', {
document_type: documentType ? documentType.id : null,
})
})
} else {
this.executeBulkOperation(null, 'set_document_type', {
this.executeBulkEditMethod(null, 'set_document_type', {
document_type: documentType ? documentType.id : null,
})
}
@@ -562,12 +587,12 @@ export class BulkEditorComponent
modal.componentInstance.confirmClicked
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
this.executeBulkOperation(modal, 'set_storage_path', {
this.executeBulkEditMethod(modal, 'set_storage_path', {
storage_path: storagePath ? storagePath.id : null,
})
})
} else {
this.executeBulkOperation(null, 'set_storage_path', {
this.executeBulkEditMethod(null, 'set_storage_path', {
storage_path: storagePath ? storagePath.id : null,
})
}
@@ -624,7 +649,7 @@ export class BulkEditorComponent
modal.componentInstance.confirmClicked
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
this.executeBulkOperation(modal, 'modify_custom_fields', {
this.executeBulkEditMethod(modal, 'modify_custom_fields', {
add_custom_fields: changedCustomFields.itemsToAdd.map((f) => f.id),
remove_custom_fields: changedCustomFields.itemsToRemove.map(
(f) => f.id
@@ -632,7 +657,7 @@ export class BulkEditorComponent
})
})
} else {
this.executeBulkOperation(null, 'modify_custom_fields', {
this.executeBulkEditMethod(null, 'modify_custom_fields', {
add_custom_fields: changedCustomFields.itemsToAdd.map((f) => f.id),
remove_custom_fields: changedCustomFields.itemsToRemove.map(
(f) => f.id
@@ -762,10 +787,16 @@ export class BulkEditorComponent
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
modal.componentInstance.buttonsEnabled = false
this.executeBulkOperation(modal, 'delete', {})
this.executeDocumentAction(
modal,
this.documentService.deleteDocuments(Array.from(this.list.selected))
)
})
} else {
this.executeBulkOperation(null, 'delete', {})
this.executeDocumentAction(
null,
this.documentService.deleteDocuments(Array.from(this.list.selected))
)
}
}
@@ -804,7 +835,12 @@ export class BulkEditorComponent
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
modal.componentInstance.buttonsEnabled = false
this.executeBulkOperation(modal, 'reprocess', {})
this.executeDocumentAction(
modal,
this.documentService.reprocessDocuments(
Array.from(this.list.selected)
)
)
})
}
@@ -815,7 +851,7 @@ export class BulkEditorComponent
modal.componentInstance.confirmClicked.subscribe(
({ permissions, merge }) => {
modal.componentInstance.buttonsEnabled = false
this.executeBulkOperation(modal, 'set_permissions', {
this.executeBulkEditMethod(modal, 'set_permissions', {
...permissions,
merge,
})
@@ -838,9 +874,13 @@ export class BulkEditorComponent
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
rotateDialog.buttonsEnabled = false
this.executeBulkOperation(modal, 'rotate', {
degrees: rotateDialog.degrees,
})
this.executeDocumentAction(
modal,
this.documentService.rotateDocuments(
Array.from(this.list.selected),
rotateDialog.degrees
)
)
})
}
@@ -856,18 +896,22 @@ export class BulkEditorComponent
mergeDialog.confirmClicked
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
const args = {}
const args: MergeDocumentsRequest = {}
if (mergeDialog.metadataDocumentID > -1) {
args['metadata_document_id'] = mergeDialog.metadataDocumentID
args.metadata_document_id = mergeDialog.metadataDocumentID
}
if (mergeDialog.deleteOriginals) {
args['delete_originals'] = true
args.delete_originals = true
}
if (mergeDialog.archiveFallback) {
args['archive_fallback'] = true
args.archive_fallback = true
}
mergeDialog.buttonsEnabled = false
this.executeBulkOperation(modal, 'merge', args, mergeDialog.documentIDs)
this.executeDocumentAction(
modal,
this.documentService.mergeDocuments(mergeDialog.documentIDs, args),
{ deleteOriginals: !!args.delete_originals }
)
this.toastService.showInfo(
$localize`Merged document will be queued for consumption.`
)

View File

@@ -230,6 +230,88 @@ describe(`DocumentService`, () => {
})
})
it('should call appropriate api endpoint for delete documents', () => {
const ids = [1, 2, 3]
subscription = service.deleteDocuments(ids).subscribe()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}${endpoint}/delete/`
)
expect(req.request.method).toEqual('POST')
expect(req.request.body).toEqual({
documents: ids,
})
})
it('should call appropriate api endpoint for reprocess documents', () => {
const ids = [1, 2, 3]
subscription = service.reprocessDocuments(ids).subscribe()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}${endpoint}/reprocess/`
)
expect(req.request.method).toEqual('POST')
expect(req.request.body).toEqual({
documents: ids,
})
})
it('should call appropriate api endpoint for rotate documents', () => {
const ids = [1, 2, 3]
subscription = service.rotateDocuments(ids, 90).subscribe()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}${endpoint}/rotate/`
)
expect(req.request.method).toEqual('POST')
expect(req.request.body).toEqual({
documents: ids,
degrees: 90,
source_mode: 'latest_version',
})
})
it('should call appropriate api endpoint for merge documents', () => {
const ids = [1, 2, 3]
const args = { metadata_document_id: 1, delete_originals: true }
subscription = service.mergeDocuments(ids, args).subscribe()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}${endpoint}/merge/`
)
expect(req.request.method).toEqual('POST')
expect(req.request.body).toEqual({
documents: ids,
metadata_document_id: 1,
delete_originals: true,
})
})
it('should call appropriate api endpoint for edit pdf', () => {
const ids = [1]
const args = { operations: [{ page: 1, rotate: 90, doc: 0 }] }
subscription = service.editPdfDocuments(ids, args).subscribe()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}${endpoint}/edit_pdf/`
)
expect(req.request.method).toEqual('POST')
expect(req.request.body).toEqual({
documents: ids,
operations: [{ page: 1, rotate: 90, doc: 0 }],
})
})
it('should call appropriate api endpoint for remove password', () => {
const ids = [1]
const args = { password: 'secret', update_document: true }
subscription = service.removePasswordDocuments(ids, args).subscribe()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}${endpoint}/remove_password/`
)
expect(req.request.method).toEqual('POST')
expect(req.request.body).toEqual({
documents: ids,
password: 'secret',
update_document: true,
})
})
it('should return the correct preview URL for a single document', () => {
let url = service.getPreviewUrl(documents[0].id)
expect(url).toEqual(

View File

@@ -37,6 +37,50 @@ export interface SelectionData {
selected_custom_fields: SelectionDataItem[]
}
export enum BulkEditSourceMode {
LATEST_VERSION = 'latest_version',
EXPLICIT_SELECTION = 'explicit_selection',
}
export type DocumentBulkEditMethod =
| 'set_correspondent'
| 'set_document_type'
| 'set_storage_path'
| 'add_tag'
| 'remove_tag'
| 'modify_tags'
| 'modify_custom_fields'
| 'set_permissions'
export interface MergeDocumentsRequest {
metadata_document_id?: number
delete_originals?: boolean
archive_fallback?: boolean
source_mode?: BulkEditSourceMode
}
export interface EditPdfOperation {
page: number
rotate?: number
doc?: number
}
export interface EditPdfDocumentsRequest {
operations: EditPdfOperation[]
delete_original?: boolean
update_document?: boolean
include_metadata?: boolean
source_mode?: BulkEditSourceMode
}
export interface RemovePasswordDocumentsRequest {
password: string
update_document?: boolean
delete_original?: boolean
include_metadata?: boolean
source_mode?: BulkEditSourceMode
}
@Injectable({
providedIn: 'root',
})
@@ -294,7 +338,7 @@ export class DocumentService extends AbstractPaperlessService<Document> {
return this.http.get<DocumentMetadata>(url.toString())
}
bulkEdit(ids: number[], method: string, args: any) {
bulkEdit(ids: number[], method: DocumentBulkEditMethod, args: any) {
return this.http.post(this.getResourceUrl(null, 'bulk_edit'), {
documents: ids,
method: method,
@@ -302,6 +346,54 @@ export class DocumentService extends AbstractPaperlessService<Document> {
})
}
deleteDocuments(ids: number[]) {
return this.http.post(this.getResourceUrl(null, 'delete'), {
documents: ids,
})
}
reprocessDocuments(ids: number[]) {
return this.http.post(this.getResourceUrl(null, 'reprocess'), {
documents: ids,
})
}
rotateDocuments(
ids: number[],
degrees: number,
sourceMode: BulkEditSourceMode = BulkEditSourceMode.LATEST_VERSION
) {
return this.http.post(this.getResourceUrl(null, 'rotate'), {
documents: ids,
degrees,
source_mode: sourceMode,
})
}
mergeDocuments(ids: number[], request: MergeDocumentsRequest = {}) {
return this.http.post(this.getResourceUrl(null, 'merge'), {
documents: ids,
...request,
})
}
editPdfDocuments(ids: number[], request: EditPdfDocumentsRequest) {
return this.http.post(this.getResourceUrl(null, 'edit_pdf'), {
documents: ids,
...request,
})
}
removePasswordDocuments(
ids: number[],
request: RemovePasswordDocumentsRequest
) {
return this.http.post(this.getResourceUrl(null, 'remove_password'), {
documents: ids,
...request,
})
}
getSelectionData(ids: number[]): Observable<SelectionData> {
return this.http.post<SelectionData>(
this.getResourceUrl(null, 'selection_data'),

View File

@@ -62,7 +62,7 @@ export function hslToRgb(h, s, l) {
* @return Array The HSL representation
*/
export function rgbToHsl(r, g, b) {
;(r /= 255), (g /= 255), (b /= 255)
;((r /= 255), (g /= 255), (b /= 255))
var max = Math.max(r, g, b),
min = Math.min(r, g, b)
var h,

View File

@@ -5,7 +5,7 @@
export const environment = {
production: false,
apiBaseUrl: 'http://localhost:8000/api/',
apiVersion: '9',
apiVersion: '10',
appTitle: 'Paperless-ngx',
tag: 'dev',
version: 'DEVELOPMENT',

View File

@@ -29,12 +29,21 @@ from documents.plugins.helpers import DocumentsStatusManager
from documents.tasks import bulk_update_documents
from documents.tasks import consume_file
from documents.tasks import update_document_content_maybe_archive_file
from documents.versioning import get_latest_version_for_root
from documents.versioning import get_root_document
if TYPE_CHECKING:
from django.contrib.auth.models import User
logger: logging.Logger = logging.getLogger("paperless.bulk_edit")
SourceMode = Literal["latest_version", "explicit_selection"]
class SourceModeChoices:
LATEST_VERSION: SourceMode = "latest_version"
EXPLICIT_SELECTION: SourceMode = "explicit_selection"
@shared_task(bind=True)
def restore_archive_serial_numbers_task(
@@ -72,46 +81,21 @@ def restore_archive_serial_numbers(backup: dict[int, int | None]) -> None:
logger.info(f"Restored archive serial numbers for documents {list(backup.keys())}")
def _get_root_ids_by_doc_id(doc_ids: list[int]) -> dict[int, int]:
    """
    Resolve each provided document id to its root document id.
    - If the id is already a root document: root id is itself.
    - If the id is a version document: root id is its `root_document_id`.
    """
    qs = Document.objects.filter(id__in=doc_ids).only("id", "root_document_id")
    return {doc.id: doc.root_document_id or doc.id for doc in qs}
def _get_root_and_current_docs_by_root_id(
    root_ids: set[int],
) -> tuple[dict[int, Document], dict[int, Document]]:
    """
    Returns:
    - root_docs: root_id -> root Document
    - current_docs: root_id -> newest version Document (or root if none)
    """
    root_docs = {
        doc.id: doc
        for doc in Document.objects.filter(id__in=root_ids).select_related(
            "owner",
        )
    }
    latest_versions_by_root_id: dict[int, Document] = {}
    for version_doc in Document.objects.filter(root_document_id__in=root_ids).order_by(
        "root_document_id",
        "-id",
    ):
        root_id = version_doc.root_document_id
        if root_id is None:
            continue
        latest_versions_by_root_id.setdefault(root_id, version_doc)
    current_docs: dict[int, Document] = {
        root_id: latest_versions_by_root_id.get(root_id, root_docs[root_id])
        for root_id in root_docs
    }
    return root_docs, current_docs
def _resolve_root_and_source_doc(
    doc: Document,
    *,
    source_mode: SourceMode = SourceModeChoices.LATEST_VERSION,
) -> tuple[Document, Document]:
    root_doc = get_root_document(doc)
    if source_mode == SourceModeChoices.EXPLICIT_SELECTION:
        return root_doc, doc
    # Version IDs are explicit by default, only a selected root resolves to latest
    if doc.root_document_id is not None:
        return root_doc, doc
    return root_doc, get_latest_version_for_root(root_doc)
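# A minimal sketch of the resolution rules above (ids and version layout are
# hypothetical; shown as comments rather than live queries):
#
#     root (id=1) has versions v1 (id=2) and v2 (id=3, newest)
#
#     _resolve_root_and_source_doc(root)                                   -> (root, v2)
#     _resolve_root_and_source_doc(v1)                                     -> (root, v1)
#     _resolve_root_and_source_doc(root, source_mode="explicit_selection") -> (root, root)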
def set_correspondent(
@@ -421,21 +405,32 @@ def rotate(
doc_ids: list[int],
degrees: int,
*,
source_mode: SourceMode = SourceModeChoices.LATEST_VERSION,
user: User | None = None,
) -> Literal["OK"]:
logger.info(
f"Attempting to rotate {len(doc_ids)} documents by {degrees} degrees.",
)
doc_to_root_id = _get_root_ids_by_doc_id(doc_ids)
root_ids = set(doc_to_root_id.values())
root_docs_by_id, current_docs_by_root_id = _get_root_and_current_docs_by_root_id(
root_ids,
)
docs_by_id = {
doc.id: doc
for doc in Document.objects.select_related("root_document").filter(
id__in=doc_ids,
)
}
docs_by_root_id: dict[int, tuple[Document, Document]] = {}
for doc_id in doc_ids:
doc = docs_by_id.get(doc_id)
if doc is None:
continue
root_doc, source_doc = _resolve_root_and_source_doc(
doc,
source_mode=source_mode,
)
docs_by_root_id.setdefault(root_doc.id, (root_doc, source_doc))
import pikepdf
for root_id in root_ids:
root_doc = root_docs_by_id[root_id]
source_doc = current_docs_by_root_id[root_id]
for root_doc, source_doc in docs_by_root_id.values():
if source_doc.mime_type != "application/pdf":
logger.warning(
f"Document {root_doc.id} is not a PDF, skipping rotation.",
@@ -481,12 +476,14 @@ def merge(
metadata_document_id: int | None = None,
delete_originals: bool = False,
archive_fallback: bool = False,
source_mode: SourceMode = SourceModeChoices.LATEST_VERSION,
user: User | None = None,
) -> Literal["OK"]:
logger.info(
f"Attempting to merge {len(doc_ids)} documents into a single document.",
)
qs = Document.objects.filter(id__in=doc_ids)
qs = Document.objects.select_related("root_document").filter(id__in=doc_ids)
docs_by_id = {doc.id: doc for doc in qs}
affected_docs: list[int] = []
import pikepdf
@@ -495,14 +492,20 @@ def merge(
handoff_asn: int | None = None
# use doc_ids to preserve order
for doc_id in doc_ids:
doc = qs.get(id=doc_id)
doc = docs_by_id.get(doc_id)
if doc is None:
continue
_, source_doc = _resolve_root_and_source_doc(
doc,
source_mode=source_mode,
)
try:
doc_path = (
doc.archive_path
source_doc.archive_path
if archive_fallback
and doc.mime_type != "application/pdf"
and doc.has_archive_version
else doc.source_path
and source_doc.mime_type != "application/pdf"
and source_doc.has_archive_version
else source_doc.source_path
)
with pikepdf.open(str(doc_path)) as pdf:
version = max(version, pdf.pdf_version)
@@ -584,18 +587,23 @@ def split(
pages: list[list[int]],
*,
delete_originals: bool = False,
source_mode: SourceMode = SourceModeChoices.LATEST_VERSION,
user: User | None = None,
) -> Literal["OK"]:
logger.info(
f"Attempting to split document {doc_ids[0]} into {len(pages)} documents",
)
doc = Document.objects.get(id=doc_ids[0])
doc = Document.objects.select_related("root_document").get(id=doc_ids[0])
_, source_doc = _resolve_root_and_source_doc(
doc,
source_mode=source_mode,
)
import pikepdf
consume_tasks = []
try:
with pikepdf.open(doc.source_path) as pdf:
with pikepdf.open(source_doc.source_path) as pdf:
for idx, split_doc in enumerate(pages):
dst: pikepdf.Pdf = pikepdf.new()
for page in split_doc:
@@ -659,25 +667,17 @@ def delete_pages(
doc_ids: list[int],
pages: list[int],
*,
source_mode: SourceMode = SourceModeChoices.LATEST_VERSION,
user: User | None = None,
) -> Literal["OK"]:
logger.info(
f"Attempting to delete pages {pages} from {len(doc_ids)} documents",
)
doc = Document.objects.select_related("root_document").get(id=doc_ids[0])
root_doc: Document
if doc.root_document_id is None or doc.root_document is None:
root_doc = doc
else:
root_doc = doc.root_document
source_doc = (
Document.objects.filter(Q(id=root_doc.id) | Q(root_document=root_doc))
.order_by("-id")
.first()
root_doc, source_doc = _resolve_root_and_source_doc(
doc,
source_mode=source_mode,
)
if source_doc is None:
source_doc = root_doc
pages = sorted(pages) # sort pages to avoid index issues
import pikepdf
@@ -722,6 +722,7 @@ def edit_pdf(
delete_original: bool = False,
update_document: bool = False,
include_metadata: bool = True,
source_mode: SourceMode = SourceModeChoices.LATEST_VERSION,
user: User | None = None,
) -> Literal["OK"]:
"""
@@ -736,19 +737,10 @@ def edit_pdf(
f"Editing PDF of document {doc_ids[0]} with {len(operations)} operations",
)
doc = Document.objects.select_related("root_document").get(id=doc_ids[0])
root_doc: Document
if doc.root_document_id is None or doc.root_document is None:
root_doc = doc
else:
root_doc = doc.root_document
source_doc = (
Document.objects.filter(Q(id=root_doc.id) | Q(root_document=root_doc))
.order_by("-id")
.first()
root_doc, source_doc = _resolve_root_and_source_doc(
doc,
source_mode=source_mode,
)
if source_doc is None:
source_doc = root_doc
import pikepdf
pdf_docs: list[pikepdf.Pdf] = []
@@ -859,6 +851,7 @@ def remove_password(
update_document: bool = False,
delete_original: bool = False,
include_metadata: bool = True,
source_mode: SourceMode = SourceModeChoices.LATEST_VERSION,
user: User | None = None,
) -> Literal["OK"]:
"""
@@ -868,19 +861,10 @@ def remove_password(
for doc_id in doc_ids:
doc = Document.objects.select_related("root_document").get(id=doc_id)
root_doc: Document
if doc.root_document_id is None or doc.root_document is None:
root_doc = doc
else:
root_doc = doc.root_document
source_doc = (
Document.objects.filter(Q(id=root_doc.id) | Q(root_document=root_doc))
.order_by("-id")
.first()
root_doc, source_doc = _resolve_root_and_source_doc(
doc,
source_mode=source_mode,
)
if source_doc is None:
source_doc = root_doc
try:
logger.info(
f"Attempting password removal from document {doc_ids[0]}",

View File

@@ -51,11 +51,29 @@ from documents.templating.workflows import parse_w_workflow_placeholders
from documents.utils import copy_basic_file_stats
from documents.utils import copy_file_with_basic_stats
from documents.utils import run_subprocess
from paperless.parsers.remote import RemoteDocumentParser
from paperless.parsers.text import TextDocumentParser
from paperless_mail.parsers import MailDocumentParser
LOGGING_NAME: Final[str] = "paperless.consumer"
def _parser_cleanup(parser: DocumentParser) -> None:
"""
Call cleanup on a parser, handling the new-style context-manager parsers.
New-style parsers (e.g. TextDocumentParser) use __exit__ for teardown
instead of a cleanup() method. This shim will be removed once all existing parsers
    have switched to the new style and this consumer is updated to use it.
TODO(stumpylog): Remove me in the future
"""
if isinstance(parser, (TextDocumentParser, RemoteDocumentParser)):
parser.__exit__(None, None, None)
else:
parser.cleanup()
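# A sketch of the equivalence, assuming the parser implements the context-manager
# protocol (new style): _parser_cleanup(parser) behaves like the teardown of a
# clean `with` exit:
#
#     with parser:
#         ...  # parse / thumbnail work
#     # -> parser.__exit__(None, None, None)
#
# Old-style parsers keep the historical explicit parser.cleanup() call.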
class WorkflowTriggerPlugin(
NoCleanupPluginMixin,
NoSetupPluginMixin,
@@ -459,6 +477,12 @@ class ConsumerPlugin(
self.filename,
self.input_doc.mailrule_id,
)
elif isinstance(
document_parser,
(TextDocumentParser, RemoteDocumentParser),
):
# TODO(stumpylog): Remove me in the future
document_parser.parse(self.working_copy, mime_type)
else:
document_parser.parse(self.working_copy, mime_type, self.filename)
@@ -469,11 +493,15 @@ class ConsumerPlugin(
ProgressStatusOptions.WORKING,
ConsumerStatusShortMessage.GENERATING_THUMBNAIL,
)
thumbnail = document_parser.get_thumbnail(
self.working_copy,
mime_type,
self.filename,
)
if isinstance(document_parser, (TextDocumentParser, RemoteDocumentParser)):
# TODO(stumpylog): Remove me in the future
thumbnail = document_parser.get_thumbnail(self.working_copy, mime_type)
else:
thumbnail = document_parser.get_thumbnail(
self.working_copy,
mime_type,
self.filename,
)
text = document_parser.get_text()
date = document_parser.get_date()
@@ -490,7 +518,7 @@ class ConsumerPlugin(
page_count = document_parser.get_page_count(self.working_copy, mime_type)
except ParseError as e:
document_parser.cleanup()
_parser_cleanup(document_parser)
if tempdir:
tempdir.cleanup()
self._fail(
@@ -500,7 +528,7 @@ class ConsumerPlugin(
exception=e,
)
except Exception as e:
document_parser.cleanup()
_parser_cleanup(document_parser)
if tempdir:
tempdir.cleanup()
self._fail(
@@ -702,7 +730,7 @@ class ConsumerPlugin(
exception=e,
)
finally:
document_parser.cleanup()
_parser_cleanup(document_parser)
tempdir.cleanup()
self.run_post_consume_script(document)

View File

@@ -304,7 +304,7 @@ class PaperlessCommand(RichCommand):
Progress output is directed to stderr to match the convention that
progress bars are transient UI feedback, not command output. This
mirrors tqdm's default behavior and prevents progress bar rendering
        keeps to that convention and prevents progress bar rendering
from interfering with stdout-based assertions in tests or piped
command output.

View File

@@ -17,6 +17,7 @@ class Command(PaperlessCommand):
"modified) after their initial import."
)
supports_progress_bar = True
supports_multiprocessing = True
def add_arguments(self, parser):

View File

@@ -3,12 +3,10 @@ import json
import os
import shutil
import tempfile
from itertools import chain
from itertools import islice
from pathlib import Path
from typing import TYPE_CHECKING
import tqdm
from allauth.mfa.models import Authenticator
from allauth.socialaccount.models import SocialAccount
from allauth.socialaccount.models import SocialApp
@@ -19,7 +17,6 @@ from django.contrib.auth.models import Permission
from django.contrib.auth.models import User
from django.contrib.contenttypes.models import ContentType
from django.core import serializers
from django.core.management.base import BaseCommand
from django.core.management.base import CommandError
from django.core.serializers.json import DjangoJSONEncoder
from django.db import transaction
@@ -38,6 +35,7 @@ if settings.AUDIT_LOG_ENABLED:
from documents.file_handling import delete_empty_directories
from documents.file_handling import generate_filename
from documents.management.commands.base import PaperlessCommand
from documents.management.commands.mixins import CryptMixin
from documents.models import Correspondent
from documents.models import CustomField
@@ -81,14 +79,99 @@ def serialize_queryset_batched(
yield serializers.serialize("python", chunk)
class Command(CryptMixin, BaseCommand):
class StreamingManifestWriter:
"""Incrementally writes a JSON array to a file, one record at a time.
Writes to <target>.tmp first; on close(), optionally BLAKE2b-compares
with the existing file (--compare-json) and renames or discards accordingly.
On exception, discard() deletes the tmp file and leaves the original intact.
"""
def __init__(
self,
path: Path,
*,
compare_json: bool = False,
files_in_export_dir: "set[Path] | None" = None,
) -> None:
self._path = path.resolve()
self._tmp_path = self._path.with_suffix(self._path.suffix + ".tmp")
self._compare_json = compare_json
self._files_in_export_dir: set[Path] = (
files_in_export_dir if files_in_export_dir is not None else set()
)
self._file = None
self._first = True
def open(self) -> None:
self._path.parent.mkdir(parents=True, exist_ok=True)
self._file = self._tmp_path.open("w", encoding="utf-8")
self._file.write("[")
self._first = True
def write_record(self, record: dict) -> None:
if not self._first:
self._file.write(",\n")
else:
self._first = False
self._file.write(
json.dumps(record, cls=DjangoJSONEncoder, indent=2, ensure_ascii=False),
)
def write_batch(self, records: list[dict]) -> None:
for record in records:
self.write_record(record)
def close(self) -> None:
if self._file is None:
return
self._file.write("\n]")
self._file.close()
self._file = None
self._finalize()
def discard(self) -> None:
if self._file is not None:
self._file.close()
self._file = None
if self._tmp_path.exists():
self._tmp_path.unlink()
def _finalize(self) -> None:
"""Compare with existing file (if --compare-json) then rename or discard tmp."""
if self._path in self._files_in_export_dir:
self._files_in_export_dir.remove(self._path)
if self._compare_json:
existing_hash = hashlib.blake2b(self._path.read_bytes()).hexdigest()
new_hash = hashlib.blake2b(self._tmp_path.read_bytes()).hexdigest()
if existing_hash == new_hash:
self._tmp_path.unlink()
return
self._tmp_path.rename(self._path)
def __enter__(self) -> "StreamingManifestWriter":
self.open()
return self
def __exit__(self, exc_type, exc_val, exc_tb) -> None:
if exc_type is not None:
self.discard()
else:
self.close()
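# A usage sketch of the writer (hypothetical path; records mirror the shape that
# Django's "python" serializer produces):
#
#     with StreamingManifestWriter(Path("/tmp/export/manifest.json")) as writer:
#         writer.write_record({"model": "documents.tag", "pk": 1, "fields": {"name": "inbox"}})
#         writer.write_batch([{"model": "documents.tag", "pk": 2, "fields": {"name": "todo"}}])
#
# A clean exit renames manifest.json.tmp to manifest.json; an exception discards
# the tmp file and leaves any pre-existing manifest.json untouched.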
class Command(CryptMixin, PaperlessCommand):
help = (
"Decrypt and rename all files in our collection into a given target "
"directory. And include a manifest file containing document data for "
"easy import."
)
supports_progress_bar = True
supports_multiprocessing = False
def add_arguments(self, parser) -> None:
super().add_arguments(parser)
parser.add_argument("target")
parser.add_argument(
@@ -195,13 +278,6 @@ class Command(CryptMixin, BaseCommand):
help="If set, only the database will be imported, not files",
)
parser.add_argument(
"--no-progress-bar",
default=False,
action="store_true",
help="If set, the progress bar will not be shown",
)
parser.add_argument(
"--passphrase",
help="If provided, is used to encrypt sensitive data in the export",
@@ -230,7 +306,6 @@ class Command(CryptMixin, BaseCommand):
self.no_thumbnail: bool = options["no_thumbnail"]
self.zip_export: bool = options["zip"]
self.data_only: bool = options["data_only"]
self.no_progress_bar: bool = options["no_progress_bar"]
self.passphrase: str | None = options.get("passphrase")
self.batch_size: int = options["batch_size"]
@@ -322,95 +397,85 @@ class Command(CryptMixin, BaseCommand):
if settings.AUDIT_LOG_ENABLED:
manifest_key_to_object_query["log_entries"] = LogEntry.objects.all()
with transaction.atomic():
manifest_dict = {}
# Build an overall manifest
for key, object_query in manifest_key_to_object_query.items():
manifest_dict[key] = list(
chain.from_iterable(
serialize_queryset_batched(
object_query,
batch_size=self.batch_size,
),
),
)
self.encrypt_secret_fields(manifest_dict)
# These are treated specially and included in the per-document manifest
# if that setting is enabled. Otherwise, they are just exported to the bulk
# manifest
document_map: dict[int, Document] = {
d.pk: d for d in manifest_key_to_object_query["documents"]
}
document_manifest = manifest_dict["documents"]
# 3. Export files from each document
for index, document_dict in tqdm.tqdm(
enumerate(document_manifest),
total=len(document_manifest),
disable=self.no_progress_bar,
):
document = document_map[document_dict["pk"]]
# 3.1. generate a unique filename
base_name = self.generate_base_name(document)
# 3.2. write filenames into manifest
original_target, thumbnail_target, archive_target = (
self.generate_document_targets(document, base_name, document_dict)
# Crypto setup before streaming begins
if self.passphrase:
self.setup_crypto(passphrase=self.passphrase)
elif MailAccount.objects.count() > 0 or SocialToken.objects.count() > 0:
self.stdout.write(
self.style.NOTICE(
"No passphrase was given, sensitive fields will be in plaintext",
),
)
# 3.3. write files to target folder
if not self.data_only:
self.copy_document_files(
document,
original_target,
thumbnail_target,
archive_target,
)
if self.split_manifest:
manifest_name = base_name.with_name(f"{base_name.stem}-manifest.json")
if self.use_folder_prefix:
manifest_name = Path("json") / manifest_name
manifest_name = (self.target / manifest_name).resolve()
manifest_name.parent.mkdir(parents=True, exist_ok=True)
content = [document_manifest[index]]
content += list(
filter(
lambda d: d["fields"]["document"] == document_dict["pk"],
manifest_dict["notes"],
),
)
content += list(
filter(
lambda d: d["fields"]["document"] == document_dict["pk"],
manifest_dict["custom_field_instances"],
),
)
self.check_and_write_json(
content,
manifest_name,
)
# These were exported already
if self.split_manifest:
del manifest_dict["documents"]
del manifest_dict["notes"]
del manifest_dict["custom_field_instances"]
# 4.1 write primary manifest to target folder
manifest = []
for key, item in manifest_dict.items():
manifest.extend(item)
document_manifest: list[dict] = []
manifest_path = (self.target / "manifest.json").resolve()
self.check_and_write_json(
manifest,
with StreamingManifestWriter(
manifest_path,
)
compare_json=self.compare_json,
files_in_export_dir=self.files_in_export_dir,
) as writer:
with transaction.atomic():
for key, qs in manifest_key_to_object_query.items():
if key == "documents":
# Accumulate for file-copy loop; written to manifest after
for batch in serialize_queryset_batched(
qs,
batch_size=self.batch_size,
):
for record in batch:
self._encrypt_record_inline(record)
document_manifest.extend(batch)
elif self.split_manifest and key in (
"notes",
"custom_field_instances",
):
# Written per-document in _write_split_manifest
pass
else:
for batch in serialize_queryset_batched(
qs,
batch_size=self.batch_size,
):
for record in batch:
self._encrypt_record_inline(record)
writer.write_batch(batch)
document_map: dict[int, Document] = {
d.pk: d for d in Document.objects.order_by("id")
}
# 3. Export files from each document
for index, document_dict in enumerate(
self.track(
document_manifest,
description="Exporting documents...",
total=len(document_manifest),
),
):
document = document_map[document_dict["pk"]]
# 3.1. generate a unique filename
base_name = self.generate_base_name(document)
# 3.2. write filenames into manifest
original_target, thumbnail_target, archive_target = (
self.generate_document_targets(document, base_name, document_dict)
)
# 3.3. write files to target folder
if not self.data_only:
self.copy_document_files(
document,
original_target,
thumbnail_target,
archive_target,
)
if self.split_manifest:
self._write_split_manifest(document_dict, document, base_name)
else:
writer.write_record(document_dict)
# 4.2 write version information to target folder
extra_metadata_path = (self.target / "metadata.json").resolve()
@@ -532,6 +597,42 @@ class Command(CryptMixin, BaseCommand):
archive_target,
)
def _encrypt_record_inline(self, record: dict) -> None:
"""Encrypt sensitive fields in a single record, if passphrase is set."""
if not self.passphrase:
return
fields = self.CRYPT_FIELDS_BY_MODEL.get(record.get("model", ""))
if fields:
for field in fields:
if record["fields"].get(field):
record["fields"][field] = self.encrypt_string(
value=record["fields"][field],
)
def _write_split_manifest(
self,
document_dict: dict,
document: Document,
base_name: Path,
) -> None:
"""Write per-document manifest file for --split-manifest mode."""
content = [document_dict]
content.extend(
serializers.serialize("python", Note.objects.filter(document=document)),
)
content.extend(
serializers.serialize(
"python",
CustomFieldInstance.objects.filter(document=document),
),
)
manifest_name = base_name.with_name(f"{base_name.stem}-manifest.json")
if self.use_folder_prefix:
manifest_name = Path("json") / manifest_name
manifest_name = (self.target / manifest_name).resolve()
manifest_name.parent.mkdir(parents=True, exist_ok=True)
self.check_and_write_json(content, manifest_name)
def check_and_write_json(
self,
content: list[dict] | dict,
@@ -549,14 +650,14 @@ class Command(CryptMixin, BaseCommand):
if target in self.files_in_export_dir:
self.files_in_export_dir.remove(target)
if self.compare_json:
target_checksum = hashlib.md5(target.read_bytes()).hexdigest()
target_checksum = hashlib.blake2b(target.read_bytes()).hexdigest()
src_str = json.dumps(
content,
cls=DjangoJSONEncoder,
indent=2,
ensure_ascii=False,
)
src_checksum = hashlib.md5(src_str.encode("utf-8")).hexdigest()
src_checksum = hashlib.blake2b(src_str.encode("utf-8")).hexdigest()
if src_checksum == target_checksum:
perform_write = False
@@ -606,28 +707,3 @@ class Command(CryptMixin, BaseCommand):
if perform_copy:
target.parent.mkdir(parents=True, exist_ok=True)
copy_file_with_basic_stats(source, target)
def encrypt_secret_fields(self, manifest: dict) -> None:
"""
Encrypts certain fields in the export. Currently limited to the mail account password
"""
if self.passphrase:
self.setup_crypto(passphrase=self.passphrase)
for crypt_config in self.CRYPT_FIELDS:
exporter_key = crypt_config["exporter_key"]
crypt_fields = crypt_config["fields"]
for manifest_record in manifest[exporter_key]:
for field in crypt_fields:
if manifest_record["fields"][field]:
manifest_record["fields"][field] = self.encrypt_string(
value=manifest_record["fields"][field],
)
elif MailAccount.objects.count() > 0 or SocialToken.objects.count() > 0:
self.stdout.write(
self.style.NOTICE(
"No passphrase was given, sensitive fields will be in plaintext",
),
)

View File

@@ -40,6 +40,7 @@ def _process_and_match(work: _WorkPackage) -> _WorkResult:
class Command(PaperlessCommand):
help = "Searches for documents where the content almost matches"
supports_progress_bar = True
supports_multiprocessing = True
def add_arguments(self, parser):

View File

@@ -8,14 +8,13 @@ from pathlib import Path
from zipfile import ZipFile
from zipfile import is_zipfile
import tqdm
import ijson
from django.conf import settings
from django.contrib.auth.models import Permission
from django.contrib.auth.models import User
from django.contrib.contenttypes.models import ContentType
from django.core.exceptions import FieldDoesNotExist
from django.core.management import call_command
from django.core.management.base import BaseCommand
from django.core.management.base import CommandError
from django.core.serializers.base import DeserializationError
from django.db import IntegrityError
@@ -25,6 +24,7 @@ from django.db.models.signals import post_save
from filelock import FileLock
from documents.file_handling import create_source_path_directory
from documents.management.commands.base import PaperlessCommand
from documents.management.commands.mixins import CryptMixin
from documents.models import Correspondent
from documents.models import CustomField
@@ -33,7 +33,6 @@ from documents.models import Document
from documents.models import DocumentType
from documents.models import Note
from documents.models import Tag
from documents.parsers import run_convert
from documents.settings import EXPORTER_ARCHIVE_NAME
from documents.settings import EXPORTER_CRYPTO_SETTINGS_NAME
from documents.settings import EXPORTER_FILE_NAME
@@ -47,6 +46,15 @@ if settings.AUDIT_LOG_ENABLED:
from auditlog.registry import auditlog
def iter_manifest_records(path: Path) -> Generator[dict, None, None]:
"""Yield records one at a time from a manifest JSON array via ijson."""
try:
with path.open("rb") as f:
yield from ijson.items(f, "item")
except ijson.JSONError as e:
raise CommandError(f"Failed to parse manifest file {path}: {e}") from e
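# A consumption sketch (hypothetical path): stream records without holding the
# whole manifest in memory, e.g.
#
#     for record in iter_manifest_records(Path("manifest.json")):
#         if record["model"] == "documents.document":
#             check_document_validity(record)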
@contextmanager
def disable_signal(sig, receiver, sender, *, weak: bool | None = None) -> Generator:
try:
@@ -57,21 +65,18 @@ def disable_signal(sig, receiver, sender, *, weak: bool | None = None) -> Genera
sig.connect(receiver=receiver, sender=sender, **kwargs)
class Command(CryptMixin, BaseCommand):
class Command(CryptMixin, PaperlessCommand):
help = (
"Using a manifest.json file, load the data from there, and import the "
"documents it refers to."
)
def add_arguments(self, parser) -> None:
parser.add_argument("source")
supports_progress_bar = True
supports_multiprocessing = False
parser.add_argument(
"--no-progress-bar",
default=False,
action="store_true",
help="If set, the progress bar will not be shown",
)
def add_arguments(self, parser) -> None:
super().add_arguments(parser)
parser.add_argument("source")
parser.add_argument(
"--data-only",
@@ -147,14 +152,9 @@ class Command(CryptMixin, BaseCommand):
Loads manifest data from the various JSON files for parsing and loading the database
"""
main_manifest_path: Path = self.source / "manifest.json"
with main_manifest_path.open() as infile:
self.manifest = json.load(infile)
self.manifest_paths.append(main_manifest_path)
for file in Path(self.source).glob("**/*-manifest.json"):
with file.open() as infile:
self.manifest += json.load(infile)
self.manifest_paths.append(file)
def load_metadata(self) -> None:
@@ -231,12 +231,10 @@ class Command(CryptMixin, BaseCommand):
self.source = Path(options["source"]).resolve()
self.data_only: bool = options["data_only"]
self.no_progress_bar: bool = options["no_progress_bar"]
self.passphrase: str | None = options.get("passphrase")
self.version: str | None = None
self.salt: str | None = None
self.manifest_paths = []
self.manifest = []
        # Create a temporary directory to extract a zip file into, even if the supplied source is not a zip file, to keep the code simpler.
with tempfile.TemporaryDirectory() as tmp_dir:
@@ -296,6 +294,9 @@ class Command(CryptMixin, BaseCommand):
else:
self.stdout.write(self.style.NOTICE("Data only import completed"))
for tmp in getattr(self, "_decrypted_tmp_paths", []):
tmp.unlink(missing_ok=True)
self.stdout.write("Updating search index...")
call_command(
"document_index",
@@ -348,11 +349,12 @@ class Command(CryptMixin, BaseCommand):
) from e
self.stdout.write("Checking the manifest")
for record in self.manifest:
# Only check if the document files exist if this is not data only
# We don't care about documents for a data only import
if not self.data_only and record["model"] == "documents.document":
check_document_validity(record)
for manifest_path in self.manifest_paths:
for record in iter_manifest_records(manifest_path):
# Only check if the document files exist if this is not data only
# We don't care about documents for a data only import
if not self.data_only and record["model"] == "documents.document":
check_document_validity(record)
def _import_files_from_manifest(self) -> None:
settings.ORIGINALS_DIR.mkdir(parents=True, exist_ok=True)
@@ -361,23 +363,31 @@ class Command(CryptMixin, BaseCommand):
self.stdout.write("Copy files into paperless...")
manifest_documents = list(
filter(lambda r: r["model"] == "documents.document", self.manifest),
)
document_records = [
{
"pk": record["pk"],
EXPORTER_FILE_NAME: record[EXPORTER_FILE_NAME],
EXPORTER_THUMBNAIL_NAME: record.get(EXPORTER_THUMBNAIL_NAME),
EXPORTER_ARCHIVE_NAME: record.get(EXPORTER_ARCHIVE_NAME),
}
for manifest_path in self.manifest_paths
for record in iter_manifest_records(manifest_path)
if record["model"] == "documents.document"
]
for record in tqdm.tqdm(manifest_documents, disable=self.no_progress_bar):
for record in self.track(document_records, description="Copying files..."):
document = Document.objects.get(pk=record["pk"])
doc_file = record[EXPORTER_FILE_NAME]
document_path = self.source / doc_file
if EXPORTER_THUMBNAIL_NAME in record:
if record[EXPORTER_THUMBNAIL_NAME]:
thumb_file = record[EXPORTER_THUMBNAIL_NAME]
thumbnail_path = (self.source / thumb_file).resolve()
else:
thumbnail_path = None
if EXPORTER_ARCHIVE_NAME in record:
if record[EXPORTER_ARCHIVE_NAME]:
archive_file = record[EXPORTER_ARCHIVE_NAME]
archive_path = self.source / archive_file
else:
@@ -392,22 +402,10 @@ class Command(CryptMixin, BaseCommand):
copy_file_with_basic_stats(document_path, document.source_path)
if thumbnail_path:
if thumbnail_path.suffix in {".png", ".PNG"}:
run_convert(
density=300,
scale="500x5000>",
alpha="remove",
strip=True,
trim=False,
auto_orient=True,
input_file=f"{thumbnail_path}[0]",
output_file=str(document.thumbnail_path),
)
else:
copy_file_with_basic_stats(
thumbnail_path,
document.thumbnail_path,
)
copy_file_with_basic_stats(
thumbnail_path,
document.thumbnail_path,
)
if archive_path:
create_source_path_directory(document.archive_path)
@@ -418,33 +416,43 @@ class Command(CryptMixin, BaseCommand):
document.save()
def _decrypt_record_if_needed(self, record: dict) -> dict:
fields = self.CRYPT_FIELDS_BY_MODEL.get(record.get("model", ""))
if fields:
for field in fields:
if record["fields"].get(field):
record["fields"][field] = self.decrypt_string(
value=record["fields"][field],
)
return record
def decrypt_secret_fields(self) -> None:
"""
The converse decryption of some fields out of the export before importing to database
The converse decryption of some fields out of the export before importing to database.
Streams records from each manifest path and writes decrypted content to a temp file.
"""
if self.passphrase:
# Salt has been loaded from metadata.json at this point, so it cannot be None
self.setup_crypto(passphrase=self.passphrase, salt=self.salt)
had_at_least_one_record = False
for crypt_config in self.CRYPT_FIELDS:
importer_model: str = crypt_config["model_name"]
crypt_fields: str = crypt_config["fields"]
for record in filter(
lambda x: x["model"] == importer_model,
self.manifest,
):
had_at_least_one_record = True
for field in crypt_fields:
if record["fields"][field]:
record["fields"][field] = self.decrypt_string(
value=record["fields"][field],
)
if had_at_least_one_record:
# It's annoying, but the DB is loaded from the JSON directly
# Maybe could change that in the future?
(self.source / "manifest.json").write_text(
json.dumps(self.manifest, indent=2, ensure_ascii=False),
)
if not self.passphrase:
return
# Salt has been loaded from metadata.json at this point, so it cannot be None
self.setup_crypto(passphrase=self.passphrase, salt=self.salt)
self._decrypted_tmp_paths: list[Path] = []
new_paths: list[Path] = []
for manifest_path in self.manifest_paths:
tmp = manifest_path.with_name(manifest_path.stem + ".decrypted.json")
with tmp.open("w", encoding="utf-8") as out:
out.write("[\n")
first = True
for record in iter_manifest_records(manifest_path):
if not first:
out.write(",\n")
json.dump(
self._decrypt_record_if_needed(record),
out,
indent=2,
ensure_ascii=False,
)
first = False
out.write("\n]\n")
self._decrypted_tmp_paths.append(tmp)
new_paths.append(tmp)
self.manifest_paths = new_paths

View File

@@ -8,6 +8,9 @@ from documents.tasks import index_reindex
class Command(PaperlessCommand):
help = "Manages the document index."
supports_progress_bar = True
supports_multiprocessing = False
def add_arguments(self, parser):
super().add_arguments(parser)
parser.add_argument("command", choices=["reindex", "optimize"])

View File

@@ -7,6 +7,9 @@ from documents.tasks import llmindex_index
class Command(PaperlessCommand):
help = "Manages the LLM-based vector index for Paperless."
supports_progress_bar = True
supports_multiprocessing = False
def add_arguments(self, parser: Any) -> None:
super().add_arguments(parser)
parser.add_argument("command", choices=["rebuild", "update"])

View File

@@ -7,6 +7,9 @@ from documents.models import Document
class Command(PaperlessCommand):
help = "Rename all documents"
supports_progress_bar = True
supports_multiprocessing = False
def handle(self, *args, **options):
for document in self.track(Document.objects.all(), description="Renaming..."):
post_save.send(Document, instance=document, created=False)

View File

@@ -180,6 +180,9 @@ class Command(PaperlessCommand):
"modified) after their initial import."
)
supports_progress_bar = True
supports_multiprocessing = False
def add_arguments(self, parser) -> None:
super().add_arguments(parser)
parser.add_argument("-c", "--correspondent", default=False, action="store_true")

View File

@@ -24,6 +24,9 @@ _LEVEL_STYLE: dict[int, tuple[str, str]] = {
class Command(PaperlessCommand):
help = "This command checks your document archive for issues."
supports_progress_bar = True
supports_multiprocessing = False
def _render_results(self, messages: SanityCheckMessages) -> None:
"""Render sanity check results as a Rich table."""

View File

@@ -30,12 +30,14 @@ def _process_document(doc_id: int) -> None:
)
shutil.move(thumb, document.thumbnail_path)
finally:
# TODO(stumpylog): Cleanup once all parsers are handled
parser.cleanup()
class Command(PaperlessCommand):
help = "This will regenerate the thumbnails for all documents."
supports_progress_bar = True
supports_multiprocessing = True
def add_arguments(self, parser) -> None:

View File

@@ -1,22 +0,0 @@
import sys
from django.core.management.commands.loaddata import Command as LoadDataCommand
# This class is used to migrate data between databases
# That's difficult to test
class Command(LoadDataCommand): # pragma: no cover
"""
Allow the loading of data from standard in. Sourced originally from:
https://gist.github.com/bmispelon/ad5a2c333443b3a1d051 (MIT licensed)
"""
def parse_name(self, fixture_name):
self.compression_formats["stdin"] = (lambda x, y: sys.stdin, None)
if fixture_name == "-":
return "-", "json", "stdin"
def find_fixtures(self, fixture_label):
if fixture_label == "-":
return [("-", None, "-")]
return super().find_fixtures(fixture_label)

View File

@@ -1,6 +1,5 @@
import base64
import os
from argparse import ArgumentParser
from typing import TypedDict
from cryptography.fernet import Fernet
@@ -21,25 +20,6 @@ class CryptFields(TypedDict):
fields: list[str]
class ProgressBarMixin:
"""
Many commands use a progress bar, which can be disabled
via this class
"""
def add_argument_progress_bar_mixin(self, parser: ArgumentParser) -> None:
parser.add_argument(
"--no-progress-bar",
default=False,
action="store_true",
help="If set, the progress bar will not be shown",
)
def handle_progress_bar_mixin(self, *args, **options) -> None:
self.no_progress_bar = options["no_progress_bar"]
self.use_progress_bar = not self.no_progress_bar
class CryptMixin:
"""
Fully based on:
@@ -71,7 +51,7 @@ class CryptMixin:
key_size = 32
kdf_algorithm = "pbkdf2_sha256"
CRYPT_FIELDS: CryptFields = [
CRYPT_FIELDS: list[CryptFields] = [
{
"exporter_key": "mail_accounts",
"model_name": "paperless_mail.mailaccount",
@@ -89,6 +69,10 @@ class CryptMixin:
],
},
]
# O(1) lookup for per-record encryption; derived from CRYPT_FIELDS at class definition time
CRYPT_FIELDS_BY_MODEL: dict[str, list[str]] = {
cfg["model_name"]: cfg["fields"] for cfg in CRYPT_FIELDS
}
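    # A lookup sketch (model name taken from CRYPT_FIELDS above): per-record code
    # can resolve the encrypted fields in O(1) instead of scanning CRYPT_FIELDS:
    #
    #     fields = CryptMixin.CRYPT_FIELDS_BY_MODEL.get("paperless_mail.mailaccount", [])
    #     # -> the configured sensitive fields; unknown models yield []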
def get_crypt_params(self) -> dict[str, dict[str, str | int]]:
return {

View File

@@ -9,6 +9,9 @@ class Command(PaperlessCommand):
help = "Prunes the audit logs of objects that no longer exist."
supports_progress_bar = True
supports_multiprocessing = False
def handle(self, *args, **options):
with transaction.atomic():
for log_entry in self.track(

View File

@@ -169,7 +169,7 @@ def match_storage_paths(document: Document, classifier: DocumentClassifier, user
def matches(matching_model: MatchingModel, document: Document):
search_flags = 0
document_content = document.content
document_content = document.get_effective_content() or ""
# Check that match is not empty
if not matching_model.match.strip():

View File

@@ -5,7 +5,7 @@ from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("documents", "0003_workflowaction_order"),
("documents", "0002_squashed"),
]
operations = [

View File

@@ -1,18 +0,0 @@
# Generated by Django 5.2.9 on 2026-01-20 20:06
from django.db import migrations
from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0002_squashed"),
]
operations = [
migrations.AddField(
model_name="workflowaction",
name="order",
field=models.PositiveIntegerField(default=0, verbose_name="order"),
),
]

View File

@@ -6,7 +6,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0004_remove_document_storage_type"),
("documents", "0003_remove_document_storage_type"),
]
operations = [

View File

@@ -6,7 +6,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0005_workflowtrigger_filter_has_any_correspondents_and_more"),
("documents", "0004_workflowtrigger_filter_has_any_correspondents_and_more"),
]
operations = [

View File

@@ -7,7 +7,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0006_alter_document_checksum_unique"),
("documents", "0005_alter_document_checksum_unique"),
]
operations = [

View File

@@ -46,7 +46,7 @@ def revoke_share_link_bundle_permissions(apps, schema_editor):
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
("documents", "0007_document_content_length"),
("documents", "0006_document_content_length"),
]
operations = [

View File

@@ -6,7 +6,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0008_sharelinkbundle"),
("documents", "0007_sharelinkbundle"),
]
operations = [

View File

@@ -7,7 +7,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0009_workflowaction_passwords_alter_workflowaction_type"),
("documents", "0008_workflowaction_passwords_alter_workflowaction_type"),
]
operations = [

View File

@@ -7,7 +7,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0010_alter_document_content_length"),
("documents", "0009_alter_document_content_length"),
]
operations = [

View File

@@ -6,7 +6,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0011_optimize_integer_field_sizes"),
("documents", "0010_optimize_integer_field_sizes"),
]
operations = [

View File

@@ -7,7 +7,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0012_alter_workflowaction_type"),
("documents", "0011_alter_workflowaction_type"),
]
operations = [

View File

@@ -6,7 +6,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0013_document_root_document"),
("documents", "0012_document_root_document"),
]
operations = [

View File

@@ -124,7 +124,7 @@ def _restore_visibility_fields(apps, schema_editor):
class Migration(migrations.Migration):
dependencies = [
("documents", "0014_alter_paperlesstask_task_name"),
("documents", "0013_alter_paperlesstask_task_name"),
]
operations = [

View File

@@ -7,7 +7,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0015_savedview_visibility_to_ui_settings"),
("documents", "0014_savedview_visibility_to_ui_settings"),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]

View File

@@ -361,6 +361,42 @@ class Document(SoftDeleteModel, ModelWithOwner): # type: ignore[django-manager-
res += f" {self.title}"
return res
def get_effective_content(self) -> str | None:
"""
Returns the effective content for the document.
For root documents, this is the latest version's content when available.
For version documents, this is always the document's own content.
If the queryset already annotated ``effective_content``, that value is used.
"""
if hasattr(self, "effective_content"):
return getattr(self, "effective_content")
if self.root_document_id is not None or self.pk is None:
return self.content
prefetched_cache = getattr(self, "_prefetched_objects_cache", None)
prefetched_versions = (
prefetched_cache.get("versions")
if isinstance(prefetched_cache, dict)
else None
)
if prefetched_versions:
latest_prefetched = max(prefetched_versions, key=lambda doc: doc.id)
return latest_prefetched.content
latest_version_content = (
Document.objects.filter(root_document=self)
.order_by("-id")
.values_list("content", flat=True)
.first()
)
return (
latest_version_content
if latest_version_content is not None
else self.content
)
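    # One way a queryset could supply the ``effective_content`` annotation that
    # this method short-circuits on (an illustrative sketch, not necessarily the
    # query the codebase uses):
    #
    #     from django.db.models import F, OuterRef, Subquery
    #     from django.db.models.functions import Coalesce
    #
    #     Document.objects.annotate(
    #         effective_content=Coalesce(
    #             Subquery(
    #                 Document.objects.filter(root_document=OuterRef("pk"))
    #                 .order_by("-id")
    #                 .values("content")[:1],
    #             ),
    #             F("content"),
    #         ),
    #     )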
@property
def suggestion_content(self):
"""
@@ -373,15 +409,21 @@ class Document(SoftDeleteModel, ModelWithOwner): # type: ignore[django-manager-
This improves processing speed for large documents while keeping
enough context for accurate suggestions.
"""
if not self.content or len(self.content) <= 1200000:
return self.content
effective_content = self.get_effective_content()
if not effective_content or len(effective_content) <= 1200000:
return effective_content
else:
# Use 80% from the start and 20% from the end
# to preserve both opening and closing context.
head_len = 800000
tail_len = 200000
return " ".join((self.content[:head_len], self.content[-tail_len:]))
return " ".join(
(
effective_content[:head_len],
effective_content[-tail_len:],
),
)
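    # A quick arithmetic check of the 80/20 split above (synthetic content):
    #
    #     content = "x" * 2_000_000
    #     truncated = " ".join((content[:800_000], content[-200_000:]))
    #     assert len(truncated) == 800_000 + 1 + 200_000  # head + joining space + tail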
@property
def source_path(self) -> Path:

View File

@@ -703,15 +703,6 @@ class StoragePathField(serializers.PrimaryKeyRelatedField):
class CustomFieldSerializer(serializers.ModelSerializer):
def __init__(self, *args, **kwargs):
context = kwargs.get("context")
self.api_version = int(
context.get("request").version
if context and context.get("request")
else settings.REST_FRAMEWORK["DEFAULT_VERSION"],
)
super().__init__(*args, **kwargs)
data_type = serializers.ChoiceField(
choices=CustomField.FieldDataType,
read_only=False,
@@ -791,38 +782,6 @@ class CustomFieldSerializer(serializers.ModelSerializer):
)
return super().validate(attrs)
def to_internal_value(self, data):
ret = super().to_internal_value(data)
if (
self.api_version < 7
and ret.get("data_type", "") == CustomField.FieldDataType.SELECT
and isinstance(ret.get("extra_data", {}).get("select_options"), list)
):
ret["extra_data"]["select_options"] = [
{
"label": option,
"id": get_random_string(length=16),
}
for option in ret["extra_data"]["select_options"]
]
return ret
def to_representation(self, instance):
ret = super().to_representation(instance)
if (
self.api_version < 7
and instance.data_type == CustomField.FieldDataType.SELECT
):
# Convert the select options with ids to a list of strings
ret["extra_data"]["select_options"] = [
option["label"] for option in ret["extra_data"]["select_options"]
]
return ret
class ReadWriteSerializerMethodField(serializers.SerializerMethodField):
"""
@@ -937,50 +896,6 @@ class CustomFieldInstanceSerializer(serializers.ModelSerializer):
return data
def get_api_version(self):
return int(
self.context.get("request").version
if self.context.get("request")
else settings.REST_FRAMEWORK["DEFAULT_VERSION"],
)
def to_internal_value(self, data):
ret = super().to_internal_value(data)
if (
self.get_api_version() < 7
and ret.get("field").data_type == CustomField.FieldDataType.SELECT
and ret.get("value") is not None
):
# Convert the index of the option in the field.extra_data["select_options"]
# list to the options unique id
ret["value"] = ret.get("field").extra_data["select_options"][ret["value"]][
"id"
]
return ret
def to_representation(self, instance):
ret = super().to_representation(instance)
if (
self.get_api_version() < 7
and instance.field.data_type == CustomField.FieldDataType.SELECT
):
# return the index of the option in the field.extra_data["select_options"] list
ret["value"] = next(
(
idx
for idx, option in enumerate(
instance.field.extra_data["select_options"],
)
if option["id"] == instance.value
),
None,
)
return ret
class Meta:
model = CustomFieldInstance
fields = [
@@ -1004,20 +919,6 @@ class NotesSerializer(serializers.ModelSerializer):
fields = ["id", "note", "created", "user"]
ordering = ["-created"]
def to_representation(self, instance):
ret = super().to_representation(instance)
request = self.context.get("request")
api_version = int(
request.version if request else settings.REST_FRAMEWORK["DEFAULT_VERSION"],
)
if api_version < 8 and "user" in ret:
user_id = ret["user"]["id"]
ret["user"] = user_id
return ret
def _get_viewable_duplicates(
document: Document,
@@ -1172,22 +1073,6 @@ class DocumentSerializer(
doc["content"] = getattr(instance, "effective_content") or ""
if self.truncate_content and "content" in self.fields:
doc["content"] = doc.get("content")[0:550]
request = self.context.get("request")
api_version = int(
request.version if request else settings.REST_FRAMEWORK["DEFAULT_VERSION"],
)
if api_version < 9 and "created" in self.fields:
# provide created as a datetime for backwards compatibility
from django.utils import timezone
doc["created"] = timezone.make_aware(
datetime.combine(
instance.created,
datetime.min.time(),
),
).isoformat()
return doc
def to_internal_value(self, data):
@@ -1440,6 +1325,124 @@ class SavedViewSerializer(OwnedObjectSerializer):
"set_permissions",
]
def _get_api_version(self) -> int:
request = self.context.get("request")
return int(
request.version if request else settings.REST_FRAMEWORK["DEFAULT_VERSION"],
)
def _update_legacy_visibility_preferences(
self,
saved_view_id: int,
*,
show_on_dashboard: bool | None,
show_in_sidebar: bool | None,
) -> UiSettings | None:
if show_on_dashboard is None and show_in_sidebar is None:
return None
request = self.context.get("request")
user = request.user if request else self.user
if user is None:
return None
ui_settings, _ = UiSettings.objects.get_or_create(
user=user,
defaults={"settings": {}},
)
current_settings = (
ui_settings.settings if isinstance(ui_settings.settings, dict) else {}
)
current_settings = dict(current_settings)
saved_views_settings = current_settings.get("saved_views")
if isinstance(saved_views_settings, dict):
saved_views_settings = dict(saved_views_settings)
else:
saved_views_settings = {}
dashboard_ids = {
int(raw_id)
for raw_id in saved_views_settings.get("dashboard_views_visible_ids", [])
if str(raw_id).isdigit()
}
sidebar_ids = {
int(raw_id)
for raw_id in saved_views_settings.get("sidebar_views_visible_ids", [])
if str(raw_id).isdigit()
}
if show_on_dashboard is not None:
if show_on_dashboard:
dashboard_ids.add(saved_view_id)
else:
dashboard_ids.discard(saved_view_id)
if show_in_sidebar is not None:
if show_in_sidebar:
sidebar_ids.add(saved_view_id)
else:
sidebar_ids.discard(saved_view_id)
saved_views_settings["dashboard_views_visible_ids"] = sorted(dashboard_ids)
saved_views_settings["sidebar_views_visible_ids"] = sorted(sidebar_ids)
current_settings["saved_views"] = saved_views_settings
ui_settings.settings = current_settings
ui_settings.save(update_fields=["settings"])
return ui_settings
def to_representation(self, instance):
# TODO: remove this and related backwards compatibility code when API v9 is dropped
ret = super().to_representation(instance)
request = self.context.get("request")
api_version = self._get_api_version()
if api_version < 10:
dashboard_ids = set()
sidebar_ids = set()
user = request.user if request else None
if user is not None and hasattr(user, "ui_settings"):
ui_settings = user.ui_settings.settings or None
saved_views = None
if isinstance(ui_settings, dict):
saved_views = ui_settings.get("saved_views", {})
if isinstance(saved_views, dict):
dashboard_ids = set(
saved_views.get("dashboard_views_visible_ids", []),
)
sidebar_ids = set(
saved_views.get("sidebar_views_visible_ids", []),
)
ret["show_on_dashboard"] = instance.id in dashboard_ids
ret["show_in_sidebar"] = instance.id in sidebar_ids
return ret
def to_internal_value(self, data):
# TODO: remove this and related backwards compatibility code when API v9 is dropped
api_version = self._get_api_version()
if api_version >= 10:
return super().to_internal_value(data)
normalized_data = data.copy()
legacy_visibility_fields = {}
boolean_field = serializers.BooleanField()
for field_name in ("show_on_dashboard", "show_in_sidebar"):
if field_name in normalized_data:
try:
legacy_visibility_fields[field_name] = (
boolean_field.to_internal_value(
normalized_data.get(field_name),
)
)
except serializers.ValidationError as exc:
raise serializers.ValidationError({field_name: exc.detail})
del normalized_data[field_name]
ret = super().to_internal_value(normalized_data)
ret.update(legacy_visibility_fields)
return ret
def validate(self, attrs):
attrs = super().validate(attrs)
if "display_fields" in attrs and attrs["display_fields"] is not None:
@@ -1459,6 +1462,9 @@ class SavedViewSerializer(OwnedObjectSerializer):
return attrs
def update(self, instance, validated_data):
request = self.context.get("request")
show_on_dashboard = validated_data.pop("show_on_dashboard", None)
show_in_sidebar = validated_data.pop("show_in_sidebar", None)
if "filter_rules" in validated_data:
rules_data = validated_data.pop("filter_rules")
else:
@@ -1480,9 +1486,19 @@ class SavedViewSerializer(OwnedObjectSerializer):
SavedViewFilterRule.objects.filter(saved_view=instance).delete()
for rule_data in rules_data:
SavedViewFilterRule.objects.create(saved_view=instance, **rule_data)
ui_settings = self._update_legacy_visibility_preferences(
instance.id,
show_on_dashboard=show_on_dashboard,
show_in_sidebar=show_in_sidebar,
)
if request is not None and ui_settings is not None:
request.user.ui_settings = ui_settings
return instance
def create(self, validated_data):
request = self.context.get("request")
show_on_dashboard = validated_data.pop("show_on_dashboard", None)
show_in_sidebar = validated_data.pop("show_in_sidebar", None)
rules_data = validated_data.pop("filter_rules")
if "user" in validated_data:
# backwards compatibility
@@ -1490,6 +1506,13 @@ class SavedViewSerializer(OwnedObjectSerializer):
saved_view = super().create(validated_data)
for rule_data in rules_data:
SavedViewFilterRule.objects.create(saved_view=saved_view, **rule_data)
ui_settings = self._update_legacy_visibility_preferences(
saved_view.id,
show_on_dashboard=show_on_dashboard,
show_in_sidebar=show_in_sidebar,
)
if request is not None and ui_settings is not None:
request.user.ui_settings = ui_settings
return saved_view
@@ -1517,11 +1540,124 @@ class DocumentListSerializer(serializers.Serializer):
return documents
class SourceModeValidationMixin:
def validate_source_mode(self, source_mode: str) -> str:
        if source_mode not in (
            bulk_edit.SourceModeChoices.LATEST_VERSION,
            bulk_edit.SourceModeChoices.EXPLICIT_SELECTION,
        ):
raise serializers.ValidationError("Invalid source_mode")
return source_mode
class RotateDocumentsSerializer(DocumentListSerializer, SourceModeValidationMixin):
degrees = serializers.IntegerField(required=True)
source_mode = serializers.CharField(
required=False,
default=bulk_edit.SourceModeChoices.LATEST_VERSION,
)
class MergeDocumentsSerializer(DocumentListSerializer, SourceModeValidationMixin):
metadata_document_id = serializers.IntegerField(
required=False,
allow_null=True,
)
delete_originals = serializers.BooleanField(required=False, default=False)
archive_fallback = serializers.BooleanField(required=False, default=False)
source_mode = serializers.CharField(
required=False,
default=bulk_edit.SourceModeChoices.LATEST_VERSION,
)
class EditPdfDocumentsSerializer(DocumentListSerializer, SourceModeValidationMixin):
operations = serializers.ListField(required=True)
delete_original = serializers.BooleanField(required=False, default=False)
update_document = serializers.BooleanField(required=False, default=False)
include_metadata = serializers.BooleanField(required=False, default=True)
source_mode = serializers.CharField(
required=False,
default=bulk_edit.SourceModeChoices.LATEST_VERSION,
)
def validate(self, attrs):
documents = attrs["documents"]
if len(documents) > 1:
raise serializers.ValidationError(
"Edit PDF method only supports one document",
)
operations = attrs["operations"]
if not isinstance(operations, list):
raise serializers.ValidationError("operations must be a list")
for op in operations:
if not isinstance(op, dict):
raise serializers.ValidationError("invalid operation entry")
if "page" not in op or not isinstance(op["page"], int):
raise serializers.ValidationError("page must be an integer")
if "rotate" in op and not isinstance(op["rotate"], int):
raise serializers.ValidationError("rotate must be an integer")
if "doc" in op and not isinstance(op["doc"], int):
raise serializers.ValidationError("doc must be an integer")
if attrs["update_document"]:
max_idx = max(op.get("doc", 0) for op in operations)
if max_idx > 0:
raise serializers.ValidationError(
"update_document only allowed with a single output document",
)
doc = Document.objects.get(id=documents[0])
if doc.page_count:
for op in operations:
if op["page"] < 1 or op["page"] > doc.page_count:
raise serializers.ValidationError(
f"Page {op['page']} is out of bounds for document with {doc.page_count} pages.",
)
return attrs
class RemovePasswordDocumentsSerializer(
DocumentListSerializer,
SourceModeValidationMixin,
):
password = serializers.CharField(required=True)
update_document = serializers.BooleanField(required=False, default=False)
delete_original = serializers.BooleanField(required=False, default=False)
include_metadata = serializers.BooleanField(required=False, default=True)
source_mode = serializers.CharField(
required=False,
default=bulk_edit.SourceModeChoices.LATEST_VERSION,
)
class DeleteDocumentsSerializer(DocumentListSerializer):
pass
class ReprocessDocumentsSerializer(DocumentListSerializer):
pass
class BulkEditSerializer(
SerializerWithPerms,
DocumentListSerializer,
SetPermissionsMixin,
SourceModeValidationMixin,
):
# TODO: remove this and related backwards compatibility code when API v9 is dropped
# split, delete_pages can be removed entirely
MOVED_DOCUMENT_ACTION_ENDPOINTS = {
"delete": "/api/documents/delete/",
"reprocess": "/api/documents/reprocess/",
"rotate": "/api/documents/rotate/",
"merge": "/api/documents/merge/",
"edit_pdf": "/api/documents/edit_pdf/",
"remove_password": "/api/documents/remove_password/",
"split": "/api/documents/edit_pdf/",
"delete_pages": "/api/documents/edit_pdf/",
}
LEGACY_DOCUMENT_ACTION_METHODS = tuple(MOVED_DOCUMENT_ACTION_ENDPOINTS.keys())
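    # A resolution sketch (values from the mapping above): legacy bulk_edit
    # methods map onto the dedicated endpoints, with split/delete_pages folded
    # into edit_pdf:
    #
    #     MOVED_DOCUMENT_ACTION_ENDPOINTS["rotate"]  # -> "/api/documents/rotate/"
    #     MOVED_DOCUMENT_ACTION_ENDPOINTS["split"]   # -> "/api/documents/edit_pdf/"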
method = serializers.ChoiceField(
choices=[
"set_correspondent",
@@ -1531,15 +1667,8 @@ class BulkEditSerializer(
"remove_tag",
"modify_tags",
"modify_custom_fields",
"delete",
"reprocess",
"set_permissions",
"rotate",
"merge",
"split",
"delete_pages",
"edit_pdf",
"remove_password",
*LEGACY_DOCUMENT_ACTION_METHODS,
],
label="Method",
write_only=True,
@@ -1617,8 +1746,7 @@ class BulkEditSerializer(
return bulk_edit.edit_pdf
elif method == "remove_password":
return bulk_edit.remove_password
else: # pragma: no cover
# This will never happen as it is handled by the ChoiceField
else:
raise serializers.ValidationError("Unsupported method.")
def _validate_parameters_tags(self, parameters) -> None:
@@ -1723,6 +1851,13 @@ class BulkEditSerializer(
except ValueError:
raise serializers.ValidationError("invalid rotation degrees")
def _validate_source_mode(self, parameters) -> None:
source_mode = parameters.get(
"source_mode",
bulk_edit.SourceModeChoices.LATEST_VERSION,
)
parameters["source_mode"] = self.validate_source_mode(source_mode)
def _validate_parameters_split(self, parameters) -> None:
if "pages" not in parameters:
raise serializers.ValidationError("pages not specified")
@@ -1823,6 +1958,9 @@ class BulkEditSerializer(
method = attrs["method"]
parameters = attrs["parameters"]
if "source_mode" in parameters:
self._validate_source_mode(parameters)
if method == bulk_edit.set_correspondent:
self._validate_parameters_correspondent(parameters)
elif method == bulk_edit.set_document_type:

View File

@@ -399,6 +399,7 @@ def update_document_content_maybe_archive_file(document_id) -> None:
f"Error while parsing document {document} (ID: {document_id})",
)
finally:
# TODO(stumpylog): Cleanup once all parsers are handled
parser.cleanup()

View File

@@ -422,6 +422,34 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertEqual(args[0], [self.doc1.id])
self.assertEqual(len(kwargs), 0)
@mock.patch("documents.views.bulk_edit.delete")
def test_delete_documents_endpoint(self, m) -> None:
self.setup_mock(m, "delete")
response = self.client.post(
"/api/documents/delete/",
json.dumps({"documents": [self.doc1.id]}),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
m.assert_called_once()
args, kwargs = m.call_args
self.assertEqual(args[0], [self.doc1.id])
self.assertEqual(len(kwargs), 0)
@mock.patch("documents.views.bulk_edit.reprocess")
def test_reprocess_documents_endpoint(self, m) -> None:
self.setup_mock(m, "reprocess")
response = self.client.post(
"/api/documents/reprocess/",
json.dumps({"documents": [self.doc1.id]}),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
m.assert_called_once()
args, kwargs = m.call_args
self.assertEqual(args[0], [self.doc1.id])
self.assertEqual(len(kwargs), 0)
@mock.patch("documents.serialisers.bulk_edit.set_storage_path")
def test_api_set_storage_path(self, m) -> None:
"""
@@ -877,7 +905,7 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertEqual(kwargs["merge"], True)
@mock.patch("documents.serialisers.bulk_edit.set_storage_path")
@mock.patch("documents.serialisers.bulk_edit.merge")
@mock.patch("documents.views.bulk_edit.merge")
def test_insufficient_global_perms(self, mock_merge, mock_set_storage) -> None:
"""
GIVEN:
@@ -912,12 +940,11 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
mock_set_storage.assert_not_called()
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/merge/",
json.dumps(
{
"documents": [self.doc1.id],
"method": "merge",
"parameters": {"metadata_document_id": self.doc1.id},
"metadata_document_id": self.doc1.id,
},
),
content_type="application/json",
@@ -927,15 +954,12 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
mock_merge.assert_not_called()
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/merge/",
json.dumps(
{
"documents": [self.doc1.id],
"method": "merge",
"parameters": {
"metadata_document_id": self.doc1.id,
"delete_originals": True,
},
"metadata_document_id": self.doc1.id,
"delete_originals": True,
},
),
content_type="application/json",
@@ -1052,85 +1076,57 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
m.assert_called_once()
@mock.patch("documents.serialisers.bulk_edit.rotate")
@mock.patch("documents.views.bulk_edit.rotate")
def test_rotate(self, m) -> None:
self.setup_mock(m, "rotate")
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/rotate/",
json.dumps(
{
"documents": [self.doc2.id, self.doc3.id],
"method": "rotate",
"parameters": {"degrees": 90},
"degrees": 90,
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
m.assert_called_once()
args, kwargs = m.call_args
self.assertCountEqual(args[0], [self.doc2.id, self.doc3.id])
self.assertEqual(kwargs["degrees"], 90)
@mock.patch("documents.serialisers.bulk_edit.rotate")
def test_rotate_invalid_params(self, m) -> None:
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id, self.doc3.id],
"method": "rotate",
"parameters": {"degrees": "foo"},
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id, self.doc3.id],
"method": "rotate",
"parameters": {"degrees": 90.5},
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
m.assert_not_called()
@mock.patch("documents.serialisers.bulk_edit.merge")
def test_merge(self, m) -> None:
self.setup_mock(m, "merge")
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id, self.doc3.id],
"method": "merge",
"parameters": {"metadata_document_id": self.doc3.id},
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
m.assert_called_once()
args, kwargs = m.call_args
self.assertCountEqual(args[0], [self.doc2.id, self.doc3.id])
self.assertEqual(kwargs["metadata_document_id"], self.doc3.id)
self.assertEqual(kwargs["source_mode"], "latest_version")
self.assertEqual(kwargs["user"], self.user)
@mock.patch("documents.serialisers.bulk_edit.merge")
def test_merge_and_delete_insufficient_permissions(self, m) -> None:
@mock.patch("documents.views.bulk_edit.rotate")
def test_rotate_invalid_params(self, m) -> None:
response = self.client.post(
"/api/documents/rotate/",
json.dumps(
{
"documents": [self.doc2.id, self.doc3.id],
"degrees": "foo",
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
response = self.client.post(
"/api/documents/rotate/",
json.dumps(
{
"documents": [self.doc2.id, self.doc3.id],
"degrees": 90.5,
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
m.assert_not_called()
@mock.patch("documents.views.bulk_edit.rotate")
def test_rotate_insufficient_permissions(self, m) -> None:
self.doc1.owner = User.objects.get(username="temp_admin")
self.doc1.save()
user1 = User.objects.create(username="user1")
@@ -1138,17 +1134,13 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
user1.save()
self.client.force_authenticate(user=user1)
self.setup_mock(m, "merge")
self.setup_mock(m, "rotate")
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/rotate/",
json.dumps(
{
"documents": [self.doc1.id, self.doc2.id],
"method": "merge",
"parameters": {
"metadata_document_id": self.doc2.id,
"delete_originals": True,
},
"degrees": 90,
},
),
content_type="application/json",
@@ -1159,15 +1151,11 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertEqual(response.content, b"Insufficient permissions")
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/rotate/",
json.dumps(
{
"documents": [self.doc2.id, self.doc3.id],
"method": "merge",
"parameters": {
"metadata_document_id": self.doc2.id,
"delete_originals": True,
},
"degrees": 90,
},
),
content_type="application/json",
@@ -1176,27 +1164,78 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertEqual(response.status_code, status.HTTP_200_OK)
m.assert_called_once()
@mock.patch("documents.serialisers.bulk_edit.merge")
def test_merge_invalid_parameters(self, m) -> None:
"""
GIVEN:
- API data for merging documents is submitted

- The parameters are invalid
WHEN:
- API is called
THEN:
- The API fails with a correct error code
"""
@mock.patch("documents.views.bulk_edit.merge")
def test_merge(self, m) -> None:
self.setup_mock(m, "merge")
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/merge/",
json.dumps(
{
"documents": [self.doc2.id, self.doc3.id],
"metadata_document_id": self.doc3.id,
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
m.assert_called_once()
args, kwargs = m.call_args
self.assertCountEqual(args[0], [self.doc2.id, self.doc3.id])
self.assertEqual(kwargs["metadata_document_id"], self.doc3.id)
self.assertEqual(kwargs["source_mode"], "latest_version")
self.assertEqual(kwargs["user"], self.user)
@mock.patch("documents.views.bulk_edit.merge")
def test_merge_and_delete_insufficient_permissions(self, m) -> None:
self.doc1.owner = User.objects.get(username="temp_admin")
self.doc1.save()
user1 = User.objects.create(username="user1")
user1.user_permissions.add(*Permission.objects.all())
user1.save()
self.client.force_authenticate(user=user1)
self.setup_mock(m, "merge")
response = self.client.post(
"/api/documents/merge/",
json.dumps(
{
"documents": [self.doc1.id, self.doc2.id],
"method": "merge",
"parameters": {
"delete_originals": "not_boolean",
},
"metadata_document_id": self.doc2.id,
"delete_originals": True,
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
m.assert_not_called()
self.assertEqual(response.content, b"Insufficient permissions")
response = self.client.post(
"/api/documents/merge/",
json.dumps(
{
"documents": [self.doc2.id, self.doc3.id],
"metadata_document_id": self.doc2.id,
"delete_originals": True,
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
m.assert_called_once()
@mock.patch("documents.views.bulk_edit.merge")
def test_merge_invalid_parameters(self, m) -> None:
self.setup_mock(m, "merge")
response = self.client.post(
"/api/documents/merge/",
json.dumps(
{
"documents": [self.doc1.id, self.doc2.id],
"delete_originals": "not_boolean",
},
),
content_type="application/json",
@@ -1205,219 +1244,81 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
m.assert_not_called()
@mock.patch("documents.serialisers.bulk_edit.split")
def test_split(self, m) -> None:
self.setup_mock(m, "split")
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "split",
"parameters": {"pages": "1,2-4,5-6,7"},
},
),
content_type="application/json",
)
def test_bulk_edit_allows_legacy_file_methods_with_warning(self) -> None:
method_payloads = {
"delete": {},
"reprocess": {},
"rotate": {"degrees": 90},
"merge": {"metadata_document_id": self.doc2.id},
"edit_pdf": {"operations": [{"page": 1}]},
"remove_password": {"password": "secret"},
"split": {"pages": "1,2-4"},
"delete_pages": {"pages": [1, 2]},
}
self.assertEqual(response.status_code, status.HTTP_200_OK)
for version in (9, 10):
for method, parameters in method_payloads.items():
with self.subTest(method=method, version=version):
with mock.patch(
f"documents.views.bulk_edit.{method}",
) as mocked_method:
self.setup_mock(mocked_method, method)
with self.assertLogs("paperless.api", level="WARNING") as logs:
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": method,
"parameters": parameters,
},
),
content_type="application/json",
headers={
"Accept": f"application/json; version={version}",
},
)
m.assert_called_once()
args, kwargs = m.call_args
self.assertCountEqual(args[0], [self.doc2.id])
self.assertEqual(kwargs["pages"], [[1], [2, 3, 4], [5, 6], [7]])
self.assertEqual(kwargs["user"], self.user)
self.assertEqual(response.status_code, status.HTTP_200_OK)
mocked_method.assert_called_once()
self.assertTrue(
any(
"Deprecated bulk_edit method" in entry
and f"'{method}'" in entry
for entry in logs.output
),
)
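The subtest above pins down the translation between the two request shapes; side by side, with an illustrative document id:

# Deprecated shape: POST /api/documents/bulk_edit/ (logs a WARNING):
legacy = {"documents": [2], "method": "rotate", "parameters": {"degrees": 90}}
# Replacement shape: POST /api/documents/rotate/:
dedicated = {"documents": [2], "degrees": 90}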
def test_split_invalid_params(self) -> None:
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "split",
"parameters": {}, # pages not specified
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"pages not specified", response.content)
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "split",
"parameters": {"pages": "1:7"}, # wrong format
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"invalid pages specified", response.content)
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [
self.doc1.id,
self.doc2.id,
], # only one document supported
"method": "split",
"parameters": {"pages": "1-2,3-7"}, # wrong format
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"Split method only supports one document", response.content)
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "split",
"parameters": {
"pages": "1",
"delete_originals": "notabool",
}, # not a bool
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"delete_originals must be a boolean", response.content)
@mock.patch("documents.serialisers.bulk_edit.delete_pages")
def test_delete_pages(self, m) -> None:
self.setup_mock(m, "delete_pages")
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "delete_pages",
"parameters": {"pages": [1, 2, 3, 4]},
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
m.assert_called_once()
args, kwargs = m.call_args
self.assertCountEqual(args[0], [self.doc2.id])
self.assertEqual(kwargs["pages"], [1, 2, 3, 4])
def test_delete_pages_invalid_params(self) -> None:
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [
self.doc1.id,
self.doc2.id,
], # only one document supported
"method": "delete_pages",
"parameters": {
"pages": [1, 2, 3, 4],
},
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(
b"Delete pages method only supports one document",
response.content,
)
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "delete_pages",
"parameters": {}, # pages not specified
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"pages not specified", response.content)
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "delete_pages",
"parameters": {"pages": "1-3"}, # not a list
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"pages must be a list", response.content)
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "delete_pages",
"parameters": {"pages": ["1-3"]}, # not ints
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"pages must be a list of integers", response.content)
@mock.patch("documents.serialisers.bulk_edit.edit_pdf")
@mock.patch("documents.views.bulk_edit.edit_pdf")
def test_edit_pdf(self, m) -> None:
self.setup_mock(m, "edit_pdf")
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {"operations": [{"page": 1}]},
"operations": [{"page": 1}],
"source_mode": "explicit_selection",
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
m.assert_called_once()
args, kwargs = m.call_args
self.assertCountEqual(args[0], [self.doc2.id])
self.assertEqual(kwargs["operations"], [{"page": 1}])
self.assertEqual(kwargs["source_mode"], "explicit_selection")
self.assertEqual(kwargs["user"], self.user)
def test_edit_pdf_invalid_params(self) -> None:
# multiple documents
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc2.id, self.doc3.id],
"method": "edit_pdf",
"parameters": {"operations": [{"page": 1}]},
"operations": [{"page": 1}],
},
),
content_type="application/json",
@@ -1425,44 +1326,25 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"Edit PDF method only supports one document", response.content)
# no operations specified
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {},
"operations": "not_a_list",
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"operations not specified", response.content)
self.assertIn(b"Expected a list of items", response.content)
# operations not a list
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {"operations": "not_a_list"},
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"operations must be a list", response.content)
# invalid operation
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {"operations": ["invalid_operation"]},
"operations": ["invalid_operation"],
},
),
content_type="application/json",
@@ -1470,14 +1352,12 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"invalid operation entry", response.content)
# page not an int
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {"operations": [{"page": "not_an_int"}]},
"operations": [{"page": "not_an_int"}],
},
),
content_type="application/json",
@@ -1485,14 +1365,12 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"page must be an integer", response.content)
# rotate not an int
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {"operations": [{"page": 1, "rotate": "not_an_int"}]},
"operations": [{"page": 1, "rotate": "not_an_int"}],
},
),
content_type="application/json",
@@ -1500,14 +1378,12 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"rotate must be an integer", response.content)
# doc not an int
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {"operations": [{"page": 1, "doc": "not_an_int"}]},
"operations": [{"page": 1, "doc": "not_an_int"}],
},
),
content_type="application/json",
@@ -1515,53 +1391,13 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"doc must be an integer", response.content)
# update_document not a boolean
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {
"update_document": "not_a_bool",
"operations": [{"page": 1}],
},
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"update_document must be a boolean", response.content)
# include_metadata not a boolean
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {
"include_metadata": "not_a_bool",
"operations": [{"page": 1}],
},
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"include_metadata must be a boolean", response.content)
# update_document True but output would be multiple documents
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {
"update_document": True,
"operations": [{"page": 1, "doc": 1}, {"page": 2, "doc": 2}],
},
"update_document": True,
"operations": [{"page": 1, "doc": 1}, {"page": 2, "doc": 2}],
},
),
content_type="application/json",
@@ -1572,42 +1408,84 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
response.content,
)
@mock.patch("documents.serialisers.bulk_edit.edit_pdf")
def test_edit_pdf_page_out_of_bounds(self, m) -> None:
"""
GIVEN:
- API data for editing a PDF is submitted
- The page number is out of bounds
WHEN:
- API is called
THEN:
- The API fails with a correct error code
"""
self.setup_mock(m, "edit_pdf")
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {"operations": [{"page": 99}]},
"operations": [{"page": 1}],
"source_mode": "not_a_mode",
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"Invalid source_mode", response.content)
@mock.patch("documents.views.bulk_edit.edit_pdf")
def test_edit_pdf_page_out_of_bounds(self, m) -> None:
self.setup_mock(m, "edit_pdf")
response = self.client.post(
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc2.id],
"operations": [{"page": 99}],
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"out of bounds", response.content)
m.assert_not_called()
@mock.patch("documents.serialisers.bulk_edit.remove_password")
def test_remove_password(self, m) -> None:
self.setup_mock(m, "remove_password")
@mock.patch("documents.views.bulk_edit.edit_pdf")
def test_edit_pdf_insufficient_permissions(self, m) -> None:
self.doc1.owner = User.objects.get(username="temp_admin")
self.doc1.save()
user1 = User.objects.create(username="user1")
user1.user_permissions.add(*Permission.objects.all())
user1.save()
self.client.force_authenticate(user=user1)
self.setup_mock(m, "edit_pdf")
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc1.id],
"operations": [{"page": 1}],
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
m.assert_not_called()
self.assertEqual(response.content, b"Insufficient permissions")
response = self.client.post(
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "remove_password",
"parameters": {"password": "secret", "update_document": True},
"operations": [{"page": 1}],
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
m.assert_called_once()
@mock.patch("documents.views.bulk_edit.remove_password")
def test_remove_password(self, m) -> None:
self.setup_mock(m, "remove_password")
response = self.client.post(
"/api/documents/remove_password/",
json.dumps(
{
"documents": [self.doc2.id],
"password": "secret",
"update_document": True,
},
),
content_type="application/json",
@@ -1619,36 +1497,69 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertCountEqual(args[0], [self.doc2.id])
self.assertEqual(kwargs["password"], "secret")
self.assertTrue(kwargs["update_document"])
self.assertEqual(kwargs["source_mode"], "latest_version")
self.assertEqual(kwargs["user"], self.user)
def test_remove_password_invalid_params(self) -> None:
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/remove_password/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "remove_password",
"parameters": {},
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"password not specified", response.content)
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/remove_password/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "remove_password",
"parameters": {"password": 123},
"password": 123,
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"password must be a string", response.content)
@mock.patch("documents.views.bulk_edit.remove_password")
def test_remove_password_insufficient_permissions(self, m) -> None:
self.doc1.owner = User.objects.get(username="temp_admin")
self.doc1.save()
user1 = User.objects.create(username="user1")
user1.user_permissions.add(*Permission.objects.all())
user1.save()
self.client.force_authenticate(user=user1)
self.setup_mock(m, "remove_password")
response = self.client.post(
"/api/documents/remove_password/",
json.dumps(
{
"documents": [self.doc1.id],
"password": "secret",
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
m.assert_not_called()
self.assertEqual(response.content, b"Insufficient permissions")
response = self.client.post(
"/api/documents/remove_password/",
json.dumps(
{
"documents": [self.doc2.id],
"password": "secret",
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
m.assert_called_once()
@override_settings(AUDIT_LOG_ENABLED=True)
def test_bulk_edit_audit_log_enabled_simple_field(self) -> None:

View File

@@ -323,113 +323,6 @@ class TestCustomFieldsAPI(DirectoriesMixin, APITestCase):
mock_delay.assert_called_once_with(cf_select)
def test_custom_field_select_old_version(self) -> None:
"""
GIVEN:
- Nothing
WHEN:
- API post request is made for custom fields with api version header < 7
- API get request is made for custom fields with api version header < 7
THEN:
- The select options are created with unique ids
- The select options are returned in the old format
"""
resp = self.client.post(
self.ENDPOINT,
headers={"Accept": "application/json; version=6"},
data=json.dumps(
{
"data_type": "select",
"name": "Select Field",
"extra_data": {
"select_options": [
"Option 1",
"Option 2",
],
},
},
),
content_type="application/json",
)
self.assertEqual(resp.status_code, status.HTTP_201_CREATED)
field = CustomField.objects.get(name="Select Field")
self.assertEqual(
field.extra_data["select_options"],
[
{"label": "Option 1", "id": ANY},
{"label": "Option 2", "id": ANY},
],
)
resp = self.client.get(
f"{self.ENDPOINT}{field.id}/",
headers={"Accept": "application/json; version=6"},
)
self.assertEqual(resp.status_code, status.HTTP_200_OK)
data = resp.json()
self.assertEqual(
data["extra_data"]["select_options"],
[
"Option 1",
"Option 2",
],
)
def test_custom_field_select_value_old_version(self) -> None:
"""
GIVEN:
- Existing document with custom field select
WHEN:
- API post request is made to add the field for document with api version header < 7
- API get request is made for document with api version header < 7
THEN:
- The select value is returned in the old format, the index of the option
"""
custom_field_select = CustomField.objects.create(
name="Select Field",
data_type=CustomField.FieldDataType.SELECT,
extra_data={
"select_options": [
{"label": "Option 1", "id": "abc-123"},
{"label": "Option 2", "id": "def-456"},
],
},
)
doc = Document.objects.create(
title="WOW",
content="the content",
checksum="123",
mime_type="application/pdf",
)
resp = self.client.patch(
f"/api/documents/{doc.id}/",
headers={"Accept": "application/json; version=6"},
data=json.dumps(
{
"custom_fields": [
{"field": custom_field_select.id, "value": 1},
],
},
),
content_type="application/json",
)
self.assertEqual(resp.status_code, status.HTTP_200_OK)
doc.refresh_from_db()
self.assertEqual(doc.custom_fields.first().value, "def-456")
resp = self.client.get(
f"/api/documents/{doc.id}/",
headers={"Accept": "application/json; version=6"},
)
self.assertEqual(resp.status_code, status.HTTP_200_OK)
data = resp.json()
self.assertEqual(data["custom_fields"][0]["value"], 1)
def test_create_custom_field_monetary_validation(self) -> None:
"""
GIVEN:

View File

@@ -41,6 +41,7 @@ from documents.models import SavedView
from documents.models import ShareLink
from documents.models import StoragePath
from documents.models import Tag
from documents.models import UiSettings
from documents.models import Workflow
from documents.models import WorkflowAction
from documents.models import WorkflowTrigger
@@ -176,7 +177,7 @@ class TestDocumentApi(DirectoriesMixin, DocumentConsumeDelayMixin, APITestCase):
results = response.data["results"]
self.assertEqual(len(results[0]), 0)
def test_document_fields_api_version_8_respects_created(self) -> None:
def test_document_fields_respects_created(self) -> None:
Document.objects.create(
title="legacy",
checksum="123",
@@ -186,7 +187,6 @@ class TestDocumentApi(DirectoriesMixin, DocumentConsumeDelayMixin, APITestCase):
response = self.client.get(
"/api/documents/?fields=id",
headers={"Accept": "application/json; version=8"},
format="json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
@@ -196,25 +196,22 @@ class TestDocumentApi(DirectoriesMixin, DocumentConsumeDelayMixin, APITestCase):
response = self.client.get(
"/api/documents/?fields=id,created",
headers={"Accept": "application/json; version=8"},
format="json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
results = response.data["results"]
self.assertIn("id", results[0])
self.assertIn("created", results[0])
self.assertRegex(results[0]["created"], r"^2024-01-15T00:00:00.*$")
self.assertEqual(results[0]["created"], "2024-01-15")
def test_document_legacy_created_format(self) -> None:
def test_document_created_format(self) -> None:
"""
GIVEN:
- Existing document
WHEN:
- Document is requested with api version ≥ 9
- Document is requested with api version < 9
- Document is requested
THEN:
- Document created field is returned as date
- Document created field is returned as datetime
"""
doc = Document.objects.create(
title="none",
@@ -225,14 +222,6 @@ class TestDocumentApi(DirectoriesMixin, DocumentConsumeDelayMixin, APITestCase):
response = self.client.get(
f"/api/documents/{doc.pk}/",
headers={"Accept": "application/json; version=8"},
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertRegex(response.data["created"], r"^2023-01-01T00:00:00.*$")
response = self.client.get(
f"/api/documents/{doc.pk}/",
headers={"Accept": "application/json; version=9"},
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data["created"], "2023-01-01")
@@ -2200,6 +2189,205 @@ class TestDocumentApi(DirectoriesMixin, DocumentConsumeDelayMixin, APITestCase):
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertEqual(response.data["count"], 0)
def test_saved_view_api_version_backward_compatibility(self) -> None:
"""
GIVEN:
- Saved views and UiSettings with visibility preferences
WHEN:
- API request with version=9 (legacy)
- API request with version=10 (current)
THEN:
- Version 9 returns show_on_dashboard and show_in_sidebar from UiSettings
- Version 10 omits these fields (moved to UiSettings)
"""
v1 = SavedView.objects.create(
owner=self.user,
name="dashboard_view",
sort_field="created",
)
v2 = SavedView.objects.create(
owner=self.user,
name="sidebar_view",
sort_field="created",
)
v3 = SavedView.objects.create(
owner=self.user,
name="hidden_view",
sort_field="created",
)
UiSettings.objects.update_or_create(
user=self.user,
defaults={
"settings": {
"saved_views": {
"dashboard_views_visible_ids": [v1.id],
"sidebar_views_visible_ids": [v2.id],
},
},
},
)
response_v9 = self.client.get(
"/api/saved_views/",
headers={"Accept": "application/json; version=9"},
format="json",
)
self.assertEqual(response_v9.status_code, status.HTTP_200_OK)
results_v9 = {r["id"]: r for r in response_v9.data["results"]}
self.assertIn("show_on_dashboard", results_v9[v1.id])
self.assertIn("show_in_sidebar", results_v9[v1.id])
self.assertTrue(results_v9[v1.id]["show_on_dashboard"])
self.assertFalse(results_v9[v1.id]["show_in_sidebar"])
self.assertTrue(results_v9[v2.id]["show_in_sidebar"])
self.assertFalse(results_v9[v2.id]["show_on_dashboard"])
self.assertFalse(results_v9[v3.id]["show_on_dashboard"])
self.assertFalse(results_v9[v3.id]["show_in_sidebar"])
response_v10 = self.client.get(
"/api/saved_views/",
headers={"Accept": "application/json; version=10"},
format="json",
)
self.assertEqual(response_v10.status_code, status.HTTP_200_OK)
results_v10 = {r["id"]: r for r in response_v10.data["results"]}
self.assertNotIn("show_on_dashboard", results_v10[v1.id])
self.assertNotIn("show_in_sidebar", results_v10[v1.id])
def test_saved_view_api_version_9_user_without_ui_settings(self) -> None:
"""
GIVEN:
- User with no UiSettings and a saved view
WHEN:
- API request with version=9
THEN:
- show_on_dashboard and show_in_sidebar are False (default)
"""
SavedView.objects.create(
owner=self.user,
name="test_view",
sort_field="created",
)
UiSettings.objects.filter(user=self.user).delete()
response = self.client.get(
"/api/saved_views/",
headers={"Accept": "application/json; version=9"},
format="json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
result = response.data["results"][0]
self.assertFalse(result["show_on_dashboard"])
self.assertFalse(result["show_in_sidebar"])
def test_saved_view_api_version_9_create_writes_visibility_to_ui_settings(
self,
) -> None:
"""
GIVEN:
- No UiSettings for the current user
WHEN:
- A saved view is created through API version 9 with visibility flags
THEN:
- Visibility is persisted in UiSettings.saved_views
"""
UiSettings.objects.filter(user=self.user).delete()
response = self.client.post(
"/api/saved_views/",
{
"name": "legacy-v9-create",
"sort_field": "created",
"filter_rules": [],
"show_on_dashboard": True,
"show_in_sidebar": False,
},
headers={"Accept": "application/json; version=9"},
format="json",
)
self.assertEqual(response.status_code, status.HTTP_201_CREATED)
self.assertTrue(response.data["show_on_dashboard"])
self.assertFalse(response.data["show_in_sidebar"])
self.user.refresh_from_db()
self.assertTrue(hasattr(self.user, "ui_settings"))
saved_view_settings = self.user.ui_settings.settings["saved_views"]
self.assertListEqual(
saved_view_settings["dashboard_views_visible_ids"],
[response.data["id"]],
)
self.assertListEqual(saved_view_settings["sidebar_views_visible_ids"], [])
def test_saved_view_api_version_9_patch_writes_visibility_to_ui_settings(
self,
) -> None:
"""
GIVEN:
- Existing saved views and UiSettings visibility ids
WHEN:
- A saved view is updated through API version 9 visibility flags
THEN:
- The per-user UiSettings visibility ids are updated
"""
v1 = SavedView.objects.create(
owner=self.user,
name="legacy-v9-patch-1",
sort_field="created",
)
v2 = SavedView.objects.create(
owner=self.user,
name="legacy-v9-patch-2",
sort_field="created",
)
UiSettings.objects.update_or_create(
user=self.user,
defaults={
"settings": {
"saved_views": {
"dashboard_views_visible_ids": [v1.id],
"sidebar_views_visible_ids": [v1.id, v2.id],
},
},
},
)
response = self.client.patch(
f"/api/saved_views/{v1.id}/",
{
"show_on_dashboard": False,
},
headers={"Accept": "application/json; version=9"},
format="json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertFalse(response.data["show_on_dashboard"])
self.assertTrue(response.data["show_in_sidebar"])
self.user.refresh_from_db()
saved_view_settings = self.user.ui_settings.settings["saved_views"]
self.assertListEqual(saved_view_settings["dashboard_views_visible_ids"], [])
self.assertListEqual(
saved_view_settings["sidebar_views_visible_ids"],
[v1.id, v2.id],
)
response = self.client.patch(
f"/api/saved_views/{v1.id}/",
{
"show_in_sidebar": False,
},
headers={"Accept": "application/json; version=9"},
format="json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
self.assertFalse(response.data["show_on_dashboard"])
self.assertFalse(response.data["show_in_sidebar"])
self.user.refresh_from_db()
saved_view_settings = self.user.ui_settings.settings["saved_views"]
self.assertListEqual(saved_view_settings["dashboard_views_visible_ids"], [])
self.assertListEqual(saved_view_settings["sidebar_views_visible_ids"], [v2.id])
def test_saved_view_create_update_patch(self) -> None:
User.objects.create_user("user1")
@@ -2603,26 +2791,6 @@ class TestDocumentApi(DirectoriesMixin, DocumentConsumeDelayMixin, APITestCase):
},
)
def test_docnote_serializer_v7(self) -> None:
doc = Document.objects.create(
title="test",
mime_type="application/pdf",
content="this is a document which will have notes!",
)
Note.objects.create(
note="This is a note.",
document=doc,
user=self.user,
)
self.assertEqual(
self.client.get(
f"/api/documents/{doc.pk}/",
headers={"Accept": "application/json; version=7"},
format="json",
).data["notes"][0]["user"],
self.user.id,
)
def test_create_note(self) -> None:
"""
GIVEN:
@@ -3391,14 +3559,13 @@ class TestDocumentApi(DirectoriesMixin, DocumentConsumeDelayMixin, APITestCase):
)
class TestDocumentApiV2(DirectoriesMixin, APITestCase):
class TestDocumentApiTagColors(DirectoriesMixin, APITestCase):
def setUp(self) -> None:
super().setUp()
self.user = User.objects.create_superuser(username="temp_admin")
self.client.force_authenticate(user=self.user)
self.client.defaults["HTTP_ACCEPT"] = "application/json; version=2"
def test_tag_validate_color(self) -> None:
self.assertEqual(

View File

@@ -152,7 +152,7 @@ class TestCustomFieldsSearch(DirectoriesMixin, APITestCase):
context={
"request": types.SimpleNamespace(
method="GET",
version="7",
version="9",
),
},
)

View File

@@ -25,3 +25,39 @@ class TestApiSchema(APITestCase):
ui_response = self.client.get(self.ENDPOINT + "view/")
self.assertEqual(ui_response.status_code, status.HTTP_200_OK)
def test_schema_includes_dedicated_document_edit_endpoints(self) -> None:
schema_response = self.client.get(self.ENDPOINT)
self.assertEqual(schema_response.status_code, status.HTTP_200_OK)
paths = schema_response.data["paths"]
self.assertIn("/api/documents/delete/", paths)
self.assertIn("/api/documents/reprocess/", paths)
self.assertIn("/api/documents/rotate/", paths)
self.assertIn("/api/documents/merge/", paths)
self.assertIn("/api/documents/edit_pdf/", paths)
self.assertIn("/api/documents/remove_password/", paths)
def test_schema_bulk_edit_advertises_legacy_document_action_methods(self) -> None:
schema_response = self.client.get(self.ENDPOINT)
self.assertEqual(schema_response.status_code, status.HTTP_200_OK)
schema = schema_response.data["components"]["schemas"]
bulk_schema = schema["BulkEditRequest"]
method_schema = bulk_schema["properties"]["method"]
# drf-spectacular emits the enum as a referenced schema for this field
enum_ref = method_schema["allOf"][0]["$ref"].split("/")[-1]
advertised_methods = schema[enum_ref]["enum"]
for action_method in [
"delete",
"reprocess",
"rotate",
"merge",
"edit_pdf",
"remove_password",
"split",
"delete_pages",
]:
self.assertIn(action_method, advertised_methods)

View File

@@ -405,7 +405,9 @@ class TestBulkEdit(DirectoriesMixin, TestCase):
self.assertTrue(Document.objects.filter(id=self.doc1.id).exists())
self.assertFalse(Document.objects.filter(id=version.id).exists())
def test_get_root_and_current_doc_mapping(self) -> None:
def test_resolve_root_and_source_doc_latest_version_prefers_newest_version(
self,
) -> None:
version1 = Document.objects.create(
checksum="B-v1",
title="B version 1",
@@ -417,18 +419,14 @@ class TestBulkEdit(DirectoriesMixin, TestCase):
root_document=self.doc2,
)
root_ids_by_doc_id = bulk_edit._get_root_ids_by_doc_id(
[self.doc2.id, version1.id, version2.id],
root_doc, source_doc = bulk_edit._resolve_root_and_source_doc(
self.doc2,
source_mode="latest_version",
)
self.assertEqual(root_ids_by_doc_id[self.doc2.id], self.doc2.id)
self.assertEqual(root_ids_by_doc_id[version1.id], self.doc2.id)
self.assertEqual(root_ids_by_doc_id[version2.id], self.doc2.id)
root_docs, current_docs = bulk_edit._get_root_and_current_docs_by_root_id(
{self.doc2.id},
)
self.assertEqual(root_docs[self.doc2.id].id, self.doc2.id)
self.assertEqual(current_docs[self.doc2.id].id, version2.id)
self.assertEqual(root_doc.id, self.doc2.id)
self.assertEqual(source_doc.id, version2.id)
self.assertNotEqual(source_doc.id, version1.id)
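A sketch of the helper as these assertions exercise it; the "newest version" query and its ordering are assumptions, since the implementation is not part of this diff. The explicit_selection branch matches the explicit-selection tests further down, which open the selected document's own file:

# Sketch of documents.bulk_edit._resolve_root_and_source_doc (assumed body):
def _resolve_root_and_source_doc(doc, *, source_mode):
    root = doc.root_document or doc
    if source_mode == "latest_version":
        newest = (
            Document.objects.filter(root_document=root).order_by("-id").first()
        )
        return root, newest or root
    # "explicit_selection": operate on the document the caller picked
    return root, doc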
@mock.patch("documents.tasks.bulk_update_documents.delay")
def test_set_permissions(self, m) -> None:
@@ -662,6 +660,33 @@ class TestPDFActions(DirectoriesMixin, TestCase):
self.assertEqual(result, "OK")
@mock.patch("pikepdf.open")
@mock.patch("documents.tasks.consume_file.s")
def test_merge_uses_latest_version_source_for_root_selection(
self,
mock_consume_file,
mock_open_pdf,
) -> None:
version_file = self.dirs.scratch_dir / "sample2_version_merge.pdf"
shutil.copy(self.doc2.source_path, version_file)
version = Document.objects.create(
checksum="B-v1",
title="B version 1",
root_document=self.doc2,
filename=version_file,
mime_type="application/pdf",
)
fake_pdf = mock.MagicMock()
fake_pdf.pdf_version = "1.7"
fake_pdf.pages = [mock.Mock()]
mock_open_pdf.return_value.__enter__.return_value = fake_pdf
result = bulk_edit.merge([self.doc2.id])
self.assertEqual(result, "OK")
mock_open_pdf.assert_called_once_with(str(version.source_path))
mock_consume_file.assert_not_called()
@mock.patch("documents.bulk_edit.delete.si")
@mock.patch("documents.tasks.consume_file.s")
def test_merge_and_delete_originals(
@@ -870,6 +895,36 @@ class TestPDFActions(DirectoriesMixin, TestCase):
self.assertEqual(result, "OK")
@mock.patch("documents.bulk_edit.group")
@mock.patch("pikepdf.open")
@mock.patch("documents.tasks.consume_file.s")
def test_split_uses_latest_version_source_for_root_selection(
self,
mock_consume_file,
mock_open_pdf,
mock_group,
) -> None:
version_file = self.dirs.scratch_dir / "sample2_version_split.pdf"
shutil.copy(self.doc2.source_path, version_file)
version = Document.objects.create(
checksum="B-v1",
title="B version 1",
root_document=self.doc2,
filename=version_file,
mime_type="application/pdf",
)
fake_pdf = mock.MagicMock()
fake_pdf.pages = [mock.Mock(), mock.Mock()]
mock_open_pdf.return_value.__enter__.return_value = fake_pdf
mock_group.return_value.delay.return_value = None
result = bulk_edit.split([self.doc2.id], [[1], [2]])
self.assertEqual(result, "OK")
mock_open_pdf.assert_called_once_with(version.source_path)
mock_consume_file.assert_not_called()
mock_group.return_value.delay.assert_not_called()
@mock.patch("documents.bulk_edit.delete.si")
@mock.patch("documents.tasks.consume_file.s")
@mock.patch("documents.bulk_edit.chord")
@@ -1041,6 +1096,34 @@ class TestPDFActions(DirectoriesMixin, TestCase):
self.assertIsNotNone(overrides)
self.assertEqual(result, "OK")
@mock.patch("documents.data_models.magic.from_file", return_value="application/pdf")
@mock.patch("documents.tasks.consume_file.delay")
@mock.patch("pikepdf.open")
def test_rotate_explicit_selection_uses_root_source_when_root_selected(
self,
mock_open,
mock_consume_delay,
mock_magic,
):
Document.objects.create(
checksum="B-v1",
title="B version 1",
root_document=self.doc2,
)
fake_pdf = mock.MagicMock()
fake_pdf.pages = [mock.Mock()]
mock_open.return_value.__enter__.return_value = fake_pdf
result = bulk_edit.rotate(
[self.doc2.id],
90,
source_mode="explicit_selection",
)
self.assertEqual(result, "OK")
mock_open.assert_called_once_with(self.doc2.source_path)
mock_consume_delay.assert_called_once()
@mock.patch("documents.tasks.consume_file.delay")
@mock.patch("pikepdf.Pdf.save")
@mock.patch("documents.data_models.magic.from_file", return_value="application/pdf")
@@ -1065,6 +1148,34 @@ class TestPDFActions(DirectoriesMixin, TestCase):
self.assertIsNotNone(overrides)
self.assertEqual(result, "OK")
@mock.patch("documents.data_models.magic.from_file", return_value="application/pdf")
@mock.patch("documents.tasks.consume_file.delay")
@mock.patch("pikepdf.open")
def test_delete_pages_explicit_selection_uses_root_source_when_root_selected(
self,
mock_open,
mock_consume_delay,
mock_magic,
):
Document.objects.create(
checksum="B-v1",
title="B version 1",
root_document=self.doc2,
)
fake_pdf = mock.MagicMock()
fake_pdf.pages = [mock.Mock(), mock.Mock()]
mock_open.return_value.__enter__.return_value = fake_pdf
result = bulk_edit.delete_pages(
[self.doc2.id],
[1],
source_mode="explicit_selection",
)
self.assertEqual(result, "OK")
mock_open.assert_called_once_with(self.doc2.source_path)
mock_consume_delay.assert_called_once()
@mock.patch("documents.tasks.consume_file.delay")
@mock.patch("pikepdf.Pdf.save")
def test_delete_pages_with_error(self, mock_pdf_save, mock_consume_delay):
@@ -1213,6 +1324,40 @@ class TestPDFActions(DirectoriesMixin, TestCase):
self.assertTrue(str(consumable.original_file).endswith("_edited.pdf"))
self.assertIsNotNone(overrides)
@mock.patch("documents.data_models.magic.from_file", return_value="application/pdf")
@mock.patch("documents.tasks.consume_file.delay")
@mock.patch("pikepdf.new")
@mock.patch("pikepdf.open")
def test_edit_pdf_explicit_selection_uses_root_source_when_root_selected(
self,
mock_open,
mock_new,
mock_consume_delay,
mock_magic,
):
Document.objects.create(
checksum="B-v1",
title="B version 1",
root_document=self.doc2,
)
fake_pdf = mock.MagicMock()
fake_pdf.pages = [mock.Mock()]
mock_open.return_value.__enter__.return_value = fake_pdf
output_pdf = mock.MagicMock()
output_pdf.pages = []
mock_new.return_value = output_pdf
result = bulk_edit.edit_pdf(
[self.doc2.id],
operations=[{"page": 1}],
update_document=True,
source_mode="explicit_selection",
)
self.assertEqual(result, "OK")
mock_open.assert_called_once_with(self.doc2.source_path)
mock_consume_delay.assert_called_once()
@mock.patch("documents.bulk_edit.group")
@mock.patch("documents.tasks.consume_file.s")
def test_edit_pdf_without_metadata(
@@ -1333,6 +1478,34 @@ class TestPDFActions(DirectoriesMixin, TestCase):
self.assertEqual(consumable.root_document_id, doc.id)
self.assertIsNotNone(overrides)
@mock.patch("documents.data_models.magic.from_file", return_value="application/pdf")
@mock.patch("documents.tasks.consume_file.delay")
@mock.patch("pikepdf.open")
def test_remove_password_explicit_selection_uses_root_source_when_root_selected(
self,
mock_open,
mock_consume_delay,
mock_magic,
) -> None:
Document.objects.create(
checksum="A-v1",
title="A version 1",
root_document=self.doc1,
)
fake_pdf = mock.MagicMock()
mock_open.return_value.__enter__.return_value = fake_pdf
result = bulk_edit.remove_password(
[self.doc1.id],
password="secret",
update_document=True,
source_mode="explicit_selection",
)
self.assertEqual(result, "OK")
mock_open.assert_called_once_with(self.doc1.source_path, password="secret")
mock_consume_delay.assert_called_once()
@mock.patch("documents.bulk_edit.chord")
@mock.patch("documents.bulk_edit.group")
@mock.patch("documents.tasks.consume_file.s")

View File

@@ -156,6 +156,46 @@ class TestDocument(TestCase):
)
self.assertEqual(doc.get_public_filename(), "2020-12-25 test")
def test_suggestion_content_uses_latest_version_content_for_root_documents(
self,
) -> None:
root = Document.objects.create(
title="root",
checksum="root",
mime_type="application/pdf",
content="outdated root content",
)
version = Document.objects.create(
title="v1",
checksum="v1",
mime_type="application/pdf",
root_document=root,
content="latest version content",
)
self.assertEqual(root.suggestion_content, version.content)
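One plausible shape of the property under test; the ordering used to pick the newest version is an assumption:

# Sketch of Document.suggestion_content matching these assertions:
@property
def suggestion_content(self) -> str:
    newest = Document.objects.filter(root_document=self).order_by("-id").first()
    return newest.content if newest else self.content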
def test_content_length_is_per_document_row_for_versions(self) -> None:
root = Document.objects.create(
title="root",
checksum="root",
mime_type="application/pdf",
content="abc",
)
version = Document.objects.create(
title="v1",
checksum="v1",
mime_type="application/pdf",
root_document=root,
content="abcdefgh",
)
root.refresh_from_db()
version.refresh_from_db()
self.assertEqual(root.content_length, 3)
self.assertEqual(version.content_length, 8)
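The invariant here is that content_length is computed per row and never inherited from the root. One plausible way the model maintains it; the real mechanism may equally be a database default or a signal handler:

# Sketch only, mechanism assumed:
def save(self, *args, **kwargs):
    self.content_length = len(self.content or "")
    super().save(*args, **kwargs)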
def test_suggestion_content(self) -> None:
"""

View File

@@ -147,7 +147,6 @@ class TestExportImport(
else:
raise ValueError(f"document with id {id} does not exist in manifest")
@override_settings(PASSPHRASE="test")
def _do_export(
self,
*,
@@ -441,7 +440,6 @@ class TestExportImport(
)
self.assertRaises(FileNotFoundError, call_command, "document_exporter", target)
@override_settings(PASSPHRASE="test")
def test_export_zipped(self) -> None:
"""
GIVEN:
@@ -473,7 +471,6 @@ class TestExportImport(
self.assertIn("manifest.json", zip.namelist())
self.assertIn("metadata.json", zip.namelist())
@override_settings(PASSPHRASE="test")
def test_export_zipped_format(self) -> None:
"""
GIVEN:
@@ -510,7 +507,6 @@ class TestExportImport(
self.assertIn("manifest.json", zip.namelist())
self.assertIn("metadata.json", zip.namelist())
@override_settings(PASSPHRASE="test")
def test_export_zipped_with_delete(self) -> None:
"""
GIVEN:
@@ -753,6 +749,31 @@ class TestExportImport(
call_command("document_importer", "--no-progress-bar", self.target)
self.assertEqual(Document.objects.count(), 4)
def test_folder_prefix_with_split(self) -> None:
"""
GIVEN:
- Request to export documents to directory
WHEN:
- Option use_folder_prefix is used
- Option split manifest is used
THEN:
- Documents can be imported again
"""
shutil.rmtree(Path(self.dirs.media_dir) / "documents")
shutil.copytree(
Path(__file__).parent / "samples" / "documents",
Path(self.dirs.media_dir) / "documents",
)
self._do_export(use_folder_prefix=True, split_manifest=True)
with paperless_environment():
self.assertEqual(Document.objects.count(), 4)
Document.objects.all().delete()
self.assertEqual(Document.objects.count(), 0)
call_command("document_importer", "--no-progress-bar", self.target)
self.assertEqual(Document.objects.count(), 4)
def test_import_db_transaction_failed(self) -> None:
"""
GIVEN:

View File

@@ -119,15 +119,22 @@ class TestCommandImport(
# No read permissions
original_path.chmod(0o222)
manifest_path = Path(temp_dir) / "manifest.json"
manifest_path.write_text(
json.dumps(
[
{
"model": "documents.document",
EXPORTER_FILE_NAME: "original.pdf",
EXPORTER_ARCHIVE_NAME: "archive.pdf",
},
],
),
)
cmd = Command()
cmd.source = Path(temp_dir)
cmd.manifest = [
{
"model": "documents.document",
EXPORTER_FILE_NAME: "original.pdf",
EXPORTER_ARCHIVE_NAME: "archive.pdf",
},
]
cmd.manifest_paths = [manifest_path]
cmd.data_only = False
with self.assertRaises(CommandError) as cm:
cmd.check_manifest_validity()
@@ -296,7 +303,7 @@ class TestCommandImport(
(self.dirs.scratch_dir / "manifest.json").touch()
# We're not building a manifest, so it fails, but this test doesn't care
with self.assertRaises(json.decoder.JSONDecodeError):
with self.assertRaises(CommandError):
call_command(
"document_importer",
"--no-progress-bar",
@@ -325,7 +332,7 @@ class TestCommandImport(
)
# We're not building a manifest, so it fails, but this test doesn't care
with self.assertRaises(json.decoder.JSONDecodeError):
with self.assertRaises(CommandError):
call_command(
"document_importer",
"--no-progress-bar",

View File

@@ -48,6 +48,52 @@ class _TestMatchingBase(TestCase):
class TestMatching(_TestMatchingBase):
def test_matches_uses_latest_version_content_for_root_documents(self) -> None:
root = Document.objects.create(
title="root",
checksum="root",
mime_type="application/pdf",
content="root content without token",
)
Document.objects.create(
title="v1",
checksum="v1",
mime_type="application/pdf",
root_document=root,
content="latest version contains keyword",
)
tag = Tag.objects.create(
name="tag",
match="keyword",
matching_algorithm=Tag.MATCH_ANY,
)
self.assertTrue(matching.matches(tag, root))
def test_matches_does_not_fall_back_to_root_content_when_version_exists(
self,
) -> None:
root = Document.objects.create(
title="root",
checksum="root",
mime_type="application/pdf",
content="root contains keyword",
)
Document.objects.create(
title="v1",
checksum="v1",
mime_type="application/pdf",
root_document=root,
content="latest version without token",
)
tag = Tag.objects.create(
name="tag",
match="keyword",
matching_algorithm=Tag.MATCH_ANY,
)
self.assertFalse(matching.matches(tag, root))
def test_match_none(self) -> None:
self._test_matching(
"",

View File

@@ -6,8 +6,8 @@ SIDEBAR_VIEWS_VISIBLE_IDS_KEY = "sidebar_views_visible_ids"
class TestMigrateSavedViewVisibilityToUiSettings(TestMigrations):
migrate_from = "0013_document_root_document"
migrate_to = "0015_savedview_visibility_to_ui_settings"
migrate_from = "0013_alter_paperlesstask_task_name"
migrate_to = "0014_savedview_visibility_to_ui_settings"
def setUpBeforeMigration(self, apps) -> None:
User = apps.get_model("auth", "User")
@@ -132,8 +132,8 @@ class TestMigrateSavedViewVisibilityToUiSettings(TestMigrations):
class TestReverseMigrateSavedViewVisibilityFromUiSettings(TestMigrations):
migrate_from = "0015_savedview_visibility_to_ui_settings"
migrate_to = "0014_alter_paperlesstask_task_name"
migrate_from = "0014_savedview_visibility_to_ui_settings"
migrate_to = "0013_alter_paperlesstask_task_name"
def setUpBeforeMigration(self, apps) -> None:
User = apps.get_model("auth", "User")

View File

@@ -2,8 +2,8 @@ from documents.tests.utils import TestMigrations
class TestMigrateShareLinkBundlePermissions(TestMigrations):
migrate_from = "0007_document_content_length"
migrate_to = "0008_sharelinkbundle"
migrate_from = "0006_document_content_length"
migrate_to = "0007_sharelinkbundle"
def setUpBeforeMigration(self, apps) -> None:
User = apps.get_model("auth", "User")
@@ -24,8 +24,8 @@ class TestMigrateShareLinkBundlePermissions(TestMigrations):
class TestReverseMigrateShareLinkBundlePermissions(TestMigrations):
migrate_from = "0008_sharelinkbundle"
migrate_to = "0007_document_content_length"
migrate_from = "0007_sharelinkbundle"
migrate_to = "0006_document_content_length"
def setUpBeforeMigration(self, apps) -> None:
User = apps.get_model("auth", "User")

View File

@@ -9,8 +9,8 @@ from documents.parsers import get_default_file_extension
from documents.parsers import get_parser_class_for_mime_type
from documents.parsers import get_supported_file_extensions
from documents.parsers import is_file_ext_supported
from paperless.parsers.text import TextDocumentParser
from paperless_tesseract.parsers import RasterisedDocumentParser
from paperless_text.parsers import TextDocumentParser
from paperless_tika.parsers import TikaDocumentParser

View File

@@ -176,14 +176,20 @@ from documents.serialisers import BulkEditObjectsSerializer
from documents.serialisers import BulkEditSerializer
from documents.serialisers import CorrespondentSerializer
from documents.serialisers import CustomFieldSerializer
from documents.serialisers import DeleteDocumentsSerializer
from documents.serialisers import DocumentListSerializer
from documents.serialisers import DocumentSerializer
from documents.serialisers import DocumentTypeSerializer
from documents.serialisers import DocumentVersionLabelSerializer
from documents.serialisers import DocumentVersionSerializer
from documents.serialisers import EditPdfDocumentsSerializer
from documents.serialisers import EmailSerializer
from documents.serialisers import MergeDocumentsSerializer
from documents.serialisers import NotesSerializer
from documents.serialisers import PostDocumentSerializer
from documents.serialisers import RemovePasswordDocumentsSerializer
from documents.serialisers import ReprocessDocumentsSerializer
from documents.serialisers import RotateDocumentsSerializer
from documents.serialisers import RunTaskViewSerializer
from documents.serialisers import SavedViewSerializer
from documents.serialisers import SearchResultSerializer
@@ -1522,13 +1528,17 @@ class DocumentViewSet(
return Response(sorted(entries, key=lambda x: x["timestamp"], reverse=True))
@extend_schema(
operation_id="documents_email_document",
deprecated=True,
)
@action(
methods=["post"],
detail=True,
url_path="email",
permission_classes=[IsAuthenticated, ViewDocumentsPermissions],
)
# TODO: deprecated as of 2.19, remove in future release
# TODO: deprecated, remove with drop of support for API v9
def email_document(self, request, pk=None):
request_data = request.data.copy()
request_data.setlist("documents", [pk])
@@ -2114,6 +2124,125 @@ class SavedViewViewSet(BulkPermissionMixin, PassUserMixin, ModelViewSet):
ordering_fields = ("name",)
class DocumentOperationPermissionMixin(PassUserMixin):
permission_classes = (IsAuthenticated,)
parser_classes = (parsers.JSONParser,)
METHOD_NAMES_REQUIRING_USER = {
"split",
"merge",
"rotate",
"delete_pages",
"edit_pdf",
"remove_password",
}
def _has_document_permissions(
self,
*,
user: User,
documents: list[int],
method,
parameters: dict[str, Any],
) -> bool:
if user.is_superuser:
return True
document_objs = Document.objects.select_related("owner").filter(
pk__in=documents,
)
user_is_owner_of_all_documents = all(
(doc.owner == user or doc.owner is None) for doc in document_objs
)
# check global and object permissions for all documents
has_perms = user.has_perm("documents.change_document") and all(
has_perms_owner_aware(user, "change_document", doc) for doc in document_objs
)
# check ownership for methods that change original document
if (
(
has_perms
and method
in [
bulk_edit.set_permissions,
bulk_edit.delete,
bulk_edit.rotate,
bulk_edit.delete_pages,
bulk_edit.edit_pdf,
bulk_edit.remove_password,
]
)
or (
method in [bulk_edit.merge, bulk_edit.split]
and parameters.get("delete_originals")
)
or (method == bulk_edit.edit_pdf and parameters.get("update_document"))
):
has_perms = user_is_owner_of_all_documents
# check global add permissions for methods that create documents
if (
has_perms
and (
method in [bulk_edit.split, bulk_edit.merge]
or (
method in [bulk_edit.edit_pdf, bulk_edit.remove_password]
and not parameters.get("update_document")
)
)
and not user.has_perm("documents.add_document")
):
has_perms = False
# check global delete permissions for methods that delete documents
if (
has_perms
and (
method == bulk_edit.delete
or (
method in [bulk_edit.merge, bulk_edit.split]
and parameters.get("delete_originals")
)
)
and not user.has_perm("documents.delete_document")
):
has_perms = False
return has_perms
def _execute_document_action(
self,
*,
method,
validated_data: dict[str, Any],
operation_label: str,
):
documents = validated_data["documents"]
parameters = {k: v for k, v in validated_data.items() if k != "documents"}
user = self.request.user
if method.__name__ in self.METHOD_NAMES_REQUIRING_USER:
parameters["user"] = user
if not self._has_document_permissions(
user=user,
documents=documents,
method=method,
parameters=parameters,
):
return HttpResponseForbidden("Insufficient permissions")
try:
result = method(documents, **parameters)
return Response({"result": result})
except Exception as e:
logger.warning(f"An error occurred performing {operation_label}: {e!s}")
return HttpResponseBadRequest(
f"Error performing {operation_label}, check logs for more detail.",
)
@extend_schema_view(
post=extend_schema(
operation_id="bulk_edit",
@@ -2132,7 +2261,7 @@ class SavedViewViewSet(BulkPermissionMixin, PassUserMixin, ModelViewSet):
},
),
)
class BulkEditView(PassUserMixin):
class BulkEditView(DocumentOperationPermissionMixin):
MODIFIED_FIELD_BY_METHOD = {
"set_correspondent": "correspondent",
"set_document_type": "document_type",
@@ -2154,11 +2283,24 @@ class BulkEditView(PassUserMixin):
"remove_password": None,
}
permission_classes = (IsAuthenticated,)
serializer_class = BulkEditSerializer
parser_classes = (parsers.JSONParser,)
def post(self, request, *args, **kwargs):
request_method = request.data.get("method")
api_version = int(request.version or settings.REST_FRAMEWORK["DEFAULT_VERSION"])
# TODO: remove this and related backwards compatibility code when API v9 is dropped
if request_method in BulkEditSerializer.LEGACY_DOCUMENT_ACTION_METHODS:
endpoint = BulkEditSerializer.MOVED_DOCUMENT_ACTION_ENDPOINTS[
request_method
]
logger.warning(
"Deprecated bulk_edit method '%s' requested on API version %s. "
"Use '%s' instead.",
request_method,
api_version,
endpoint,
)
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
@@ -2166,82 +2308,15 @@ class BulkEditView(PassUserMixin):
method = serializer.validated_data.get("method")
parameters = serializer.validated_data.get("parameters")
documents = serializer.validated_data.get("documents")
if method in [
bulk_edit.split,
bulk_edit.merge,
bulk_edit.rotate,
bulk_edit.delete_pages,
bulk_edit.edit_pdf,
bulk_edit.remove_password,
]:
if method.__name__ in self.METHOD_NAMES_REQUIRING_USER:
parameters["user"] = user
if not user.is_superuser:
document_objs = Document.objects.select_related("owner").filter(
pk__in=documents,
)
user_is_owner_of_all_documents = all(
(doc.owner == user or doc.owner is None) for doc in document_objs
)
# check global and object permissions for all documents
has_perms = user.has_perm("documents.change_document") and all(
has_perms_owner_aware(user, "change_document", doc)
for doc in document_objs
)
# check ownership for methods that change original document
if (
(
has_perms
and method
in [
bulk_edit.set_permissions,
bulk_edit.delete,
bulk_edit.rotate,
bulk_edit.delete_pages,
bulk_edit.edit_pdf,
bulk_edit.remove_password,
]
)
or (
method in [bulk_edit.merge, bulk_edit.split]
and parameters["delete_originals"]
)
or (method == bulk_edit.edit_pdf and parameters["update_document"])
):
has_perms = user_is_owner_of_all_documents
# check global add permissions for methods that create documents
if (
has_perms
and (
method in [bulk_edit.split, bulk_edit.merge]
or (
method in [bulk_edit.edit_pdf, bulk_edit.remove_password]
and not parameters["update_document"]
)
)
and not user.has_perm("documents.add_document")
):
has_perms = False
# check global delete permissions for methods that delete documents
if (
has_perms
and (
method == bulk_edit.delete
or (
method in [bulk_edit.merge, bulk_edit.split]
and parameters["delete_originals"]
)
)
and not user.has_perm("documents.delete_document")
):
has_perms = False
if not has_perms:
return HttpResponseForbidden("Insufficient permissions")
if not self._has_document_permissions(
user=user,
documents=documents,
method=method,
parameters=parameters,
):
return HttpResponseForbidden("Insufficient permissions")
try:
modified_field = self.MODIFIED_FIELD_BY_METHOD.get(method.__name__, None)
@@ -2298,6 +2373,168 @@ class BulkEditView(PassUserMixin):
)
@extend_schema_view(
post=extend_schema(
operation_id="documents_rotate",
description="Rotate one or more documents",
responses={
200: inline_serializer(
name="RotateDocumentsResult",
fields={
"result": serializers.CharField(),
},
),
},
),
)
class RotateDocumentsView(DocumentOperationPermissionMixin):
serializer_class = RotateDocumentsSerializer
def post(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
return self._execute_document_action(
method=bulk_edit.rotate,
validated_data=serializer.validated_data,
operation_label="document rotate",
)
@extend_schema_view(
post=extend_schema(
operation_id="documents_merge",
description="Merge selected documents into a new document",
responses={
200: inline_serializer(
name="MergeDocumentsResult",
fields={
"result": serializers.CharField(),
},
),
},
),
)
class MergeDocumentsView(DocumentOperationPermissionMixin):
serializer_class = MergeDocumentsSerializer
def post(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
return self._execute_document_action(
method=bulk_edit.merge,
validated_data=serializer.validated_data,
operation_label="document merge",
)
@extend_schema_view(
post=extend_schema(
operation_id="documents_delete",
description="Move selected documents to trash",
responses={
200: inline_serializer(
name="DeleteDocumentsResult",
fields={
"result": serializers.CharField(),
},
),
},
),
)
class DeleteDocumentsView(DocumentOperationPermissionMixin):
serializer_class = DeleteDocumentsSerializer
def post(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
return self._execute_document_action(
method=bulk_edit.delete,
validated_data=serializer.validated_data,
operation_label="document delete",
)
@extend_schema_view(
post=extend_schema(
operation_id="documents_reprocess",
description="Reprocess selected documents",
responses={
200: inline_serializer(
name="ReprocessDocumentsResult",
fields={
"result": serializers.CharField(),
},
),
},
),
)
class ReprocessDocumentsView(DocumentOperationPermissionMixin):
serializer_class = ReprocessDocumentsSerializer
def post(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
return self._execute_document_action(
method=bulk_edit.reprocess,
validated_data=serializer.validated_data,
operation_label="document reprocess",
)
@extend_schema_view(
post=extend_schema(
operation_id="documents_edit_pdf",
description="Perform PDF edit operations on a selected document",
responses={
200: inline_serializer(
name="EditPdfDocumentsResult",
fields={
"result": serializers.CharField(),
},
),
},
),
)
class EditPdfDocumentsView(DocumentOperationPermissionMixin):
serializer_class = EditPdfDocumentsSerializer
def post(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
return self._execute_document_action(
method=bulk_edit.edit_pdf,
validated_data=serializer.validated_data,
operation_label="PDF edit",
)
@extend_schema_view(
post=extend_schema(
operation_id="documents_remove_password",
description="Remove password protection from selected PDFs",
responses={
200: inline_serializer(
name="RemovePasswordDocumentsResult",
fields={
"result": serializers.CharField(),
},
),
},
),
)
class RemovePasswordDocumentsView(DocumentOperationPermissionMixin):
serializer_class = RemovePasswordDocumentsSerializer
def post(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
return self._execute_document_action(
method=bulk_edit.remove_password,
validated_data=serializer.validated_data,
operation_label="password removal",
)
@extend_schema_view(
post=extend_schema(
description="Upload a document via the API",

File diff suppressed because it is too large.


@@ -1,6 +1,7 @@
import os
from celery import Celery
from celery.signals import worker_process_init
# Set the default Django settings module for the 'celery' program.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "paperless.settings")
@@ -15,3 +16,19 @@ app.config_from_object("django.conf:settings", namespace="CELERY")
# Load task modules from all registered Django apps.
app.autodiscover_tasks()
@worker_process_init.connect
def on_worker_process_init(**kwargs) -> None: # pragma: no cover
"""
Register built-in parsers eagerly in each Celery worker process.
This registers only the built-in parsers (no entrypoint discovery) so
that workers can begin consuming documents immediately. Entrypoint
discovery for third-party parsers is deferred to the first call of
get_parser_registry() inside a task, keeping worker_process_init
well within its 4-second timeout budget.
"""
from paperless.parsers.registry import init_builtin_parsers
init_builtin_parsers()


@@ -7,6 +7,7 @@ from pathlib import Path
from django.conf import settings
from django.core.checks import Error
from django.core.checks import Tags
from django.core.checks import Warning
from django.core.checks import register
from django.db import connections
@@ -204,15 +205,16 @@ def audit_log_check(app_configs, **kwargs):
return result
@register()
@register(Tags.compatibility)
def check_v3_minimum_upgrade_version(
app_configs: object,
**kwargs: object,
) -> list[Error]:
"""Enforce that upgrades to v3 must start from v2.20.9.
"""
Enforce that upgrades to v3 must start from v2.20.10.
v3 squashes all prior migrations into 0001_squashed and 0002_squashed.
If a user skips v2.20.9, the data migration in 1075_workflowaction_order
If a user skips v2.20.10, the data migration in 1075_workflowaction_order
never runs and the squash may apply schema changes against an incomplete
database state.
"""
@@ -239,7 +241,7 @@ def check_v3_minimum_upgrade_version(
if {"0001_squashed", "0002_squashed"} & applied:
return []
# On v2.20.9 exactly — squash will pick up cleanly from here
# On v2.20.10 exactly — squash will pick up cleanly from here
if "1075_workflowaction_order" in applied:
return []
@@ -250,8 +252,8 @@ def check_v3_minimum_upgrade_version(
Error(
"Cannot upgrade to Paperless-ngx v3 from this version.",
hint=(
"Upgrading to v3 can only be performed from v2.20.9."
"Please upgrade to v2.20.9, run migrations, then upgrade to v3."
"Upgrading to v3 can only be performed from v2.20.10."
"Please upgrade to v2.20.10, run migrations, then upgrade to v3."
"See https://docs.paperless-ngx.com/setup/#upgrading for details."
),
id="paperless.E002",


@@ -1,62 +1,51 @@
import json
from typing import Any
from asgiref.sync import async_to_sync
from channels.exceptions import AcceptConnection
from channels.exceptions import DenyConnection
from channels.generic.websocket import WebsocketConsumer
from channels.generic.websocket import AsyncWebsocketConsumer
class StatusConsumer(WebsocketConsumer):
def _authenticated(self):
return "user" in self.scope and self.scope["user"].is_authenticated
class StatusConsumer(AsyncWebsocketConsumer):
def _authenticated(self) -> bool:
user: Any = self.scope.get("user")
return user is not None and user.is_authenticated
def _can_view(self, data):
user = self.scope.get("user") if self.scope.get("user") else None
async def _can_view(self, data: dict[str, Any]) -> bool:
user: Any = self.scope.get("user")
if user is None:
return False
owner_id = data.get("owner_id")
users_can_view = data.get("users_can_view", [])
groups_can_view = data.get("groups_can_view", [])
return (
user.is_superuser
or user.id == owner_id
or user.id in users_can_view
or any(
user.groups.filter(pk=group_id).exists() for group_id in groups_can_view
)
)
def connect(self):
if user.is_superuser or user.id == owner_id or user.id in users_can_view:
return True
return await user.groups.filter(pk__in=groups_can_view).aexists()
async def connect(self) -> None:
if not self._authenticated():
raise DenyConnection
else:
async_to_sync(self.channel_layer.group_add)(
"status_updates",
self.channel_name,
)
raise AcceptConnection
await self.close()
return
await self.channel_layer.group_add("status_updates", self.channel_name)
await self.accept()
def disconnect(self, close_code) -> None:
async_to_sync(self.channel_layer.group_discard)(
"status_updates",
self.channel_name,
)
async def disconnect(self, code: int) -> None:
await self.channel_layer.group_discard("status_updates", self.channel_name)
def status_update(self, event) -> None:
async def status_update(self, event: dict[str, Any]) -> None:
if not self._authenticated():
self.close()
else:
if self._can_view(event["data"]):
self.send(json.dumps(event))
await self.close()
elif await self._can_view(event["data"]):
await self.send(json.dumps(event))
def documents_deleted(self, event) -> None:
async def documents_deleted(self, event: dict[str, Any]) -> None:
if not self._authenticated():
self.close()
await self.close()
else:
self.send(json.dumps(event))
await self.send(json.dumps(event))
def document_updated(self, event: Any) -> None:
async def document_updated(self, event: dict[str, Any]) -> None:
if not self._authenticated():
self.close()
else:
if self._can_view(event["data"]):
self.send(json.dumps(event))
await self.close()
elif await self._can_view(event["data"]):
await self.send(json.dumps(event))
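# Editorial sketch (not part of this diff): producers reach these handlers
# through the channel layer. A minimal broadcast, assuming the payload shape
# implied by _can_view above (owner_id / users_can_view / groups_can_view):
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

layer = get_channel_layer()
async_to_sync(layer.group_send)(
    "status_updates",
    {
        "type": "status_update",  # dispatched to StatusConsumer.status_update
        "data": {"owner_id": 1, "users_can_view": [], "groups_can_view": []},
    },
)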


@@ -0,0 +1,379 @@
"""
Public interface for the Paperless-ngx parser plugin system.
This module defines ParserProtocol — the structural contract that every
document parser must satisfy, whether it is a built-in parser shipped with
Paperless-ngx or a third-party parser installed via a Python entrypoint.
Phase 1/2 scope: only the Protocol is defined here. The transitional
DocumentParser ABC (Phase 3) and concrete built-in parsers (Phase 3+) will
be added in later phases, so there are intentionally no imports of parser
implementations here.
Usage example (third-party parser)::
from paperless.parsers import ParserProtocol
class MyParser:
name = "my-parser"
version = "1.0.0"
author = "Acme Corp"
url = "https://example.com/my-parser"
@classmethod
def supported_mime_types(cls) -> dict[str, str]:
return {"application/x-my-format": ".myf"}
@classmethod
def score(cls, mime_type, filename, path=None):
return 10
# … implement remaining protocol methods …
assert isinstance(MyParser(), ParserProtocol)
"""
from __future__ import annotations
from typing import TYPE_CHECKING
from typing import Protocol
from typing import Self
from typing import TypedDict
from typing import runtime_checkable
if TYPE_CHECKING:
import datetime
from pathlib import Path
from types import TracebackType
__all__ = [
"MetadataEntry",
"ParserProtocol",
]
class MetadataEntry(TypedDict):
"""A single metadata field extracted from a document.
All four keys are required. Values are always serialised to strings —
type-specific conversion (dates, integers, lists) is the responsibility
of the parser before returning.
"""
namespace: str
"""URI of the metadata namespace (e.g. 'http://ns.adobe.com/pdf/1.3/')."""
prefix: str
"""Conventional namespace prefix (e.g. 'pdf', 'xmp', 'dc')."""
key: str
"""Field name within the namespace (e.g. 'Author', 'CreateDate')."""
value: str
"""String representation of the field value."""
@runtime_checkable
class ParserProtocol(Protocol):
"""Structural contract for all Paperless-ngx document parsers.
Both built-in parsers and third-party plugins (discovered via the
"paperless_ngx.parsers" entrypoint group) must satisfy this Protocol.
Because it is decorated with runtime_checkable, isinstance(obj,
ParserProtocol) works at runtime based on method presence, which is
useful for validation in ParserRegistry.discover.
Parsers must expose four string attributes at the class level so the
registry can log attribution information without instantiating the parser:
name : str
Human-readable parser name (e.g. "Tesseract OCR").
version : str
Semantic version string (e.g. "1.2.3").
author : str
Author or organisation name.
url : str
URL for documentation, source code, or issue tracker.
"""
# ------------------------------------------------------------------
# Class-level identity (checked by the registry, not Protocol methods)
# ------------------------------------------------------------------
name: str
version: str
author: str
url: str
# ------------------------------------------------------------------
# Class methods
# ------------------------------------------------------------------
@classmethod
def supported_mime_types(cls) -> dict[str, str]:
"""Return a mapping of supported MIME types to preferred file extensions.
The keys are MIME type strings (e.g. "application/pdf"), and the
values are the preferred file extension including the leading dot
(e.g. ".pdf"). The registry uses this mapping both to decide whether
a parser is a candidate for a given file and to determine the default
extension when creating archive copies.
Returns
-------
dict[str, str]
{mime_type: extension} mapping — may be empty if the parser
has been temporarily disabled.
"""
...
@classmethod
def score(
cls,
mime_type: str,
filename: str,
path: Path | None = None,
) -> int | None:
"""Return a priority score for handling this file, or None to decline.
The registry calls this after confirming that the MIME type is in
supported_mime_types. Parsers may inspect filename and optionally
the file at path to refine their confidence level.
A higher score wins. Return None to explicitly decline handling a file
even though the MIME type is listed as supported (e.g. when a feature
flag is disabled, or a required service is not configured).
Parameters
----------
mime_type:
The detected MIME type of the file to be parsed.
filename:
The original filename, including extension.
path:
Optional filesystem path to the file. Parsers that need to
inspect file content (e.g. magic-byte sniffing) may use this.
May be None when scoring happens before the file is available locally.
Returns
-------
int | None
Priority score (higher wins), or None to decline.
"""
...
# ------------------------------------------------------------------
# Properties
# ------------------------------------------------------------------
@property
def can_produce_archive(self) -> bool:
"""Whether this parser can produce a searchable PDF archive copy.
If True, the consumption pipeline may request an archive version when
processing the document, subject to the ARCHIVE_FILE_GENERATION
setting. If False, only thumbnail and text extraction are performed.
"""
...
@property
def requires_pdf_rendition(self) -> bool:
"""Whether the parser must produce a PDF for the frontend to display.
True for formats the browser cannot display natively (e.g. DOCX, ODT).
When True, the pipeline always stores the PDF output regardless of the
ARCHIVE_FILE_GENERATION setting, since the original format cannot be
shown to the user.
"""
...
# ------------------------------------------------------------------
# Core parsing interface
# ------------------------------------------------------------------
def parse(
self,
document_path: Path,
mime_type: str,
*,
produce_archive: bool = True,
) -> None:
"""Parse document_path and populate internal state.
After a successful call, callers retrieve results via get_text,
get_date, and get_archive_path.
Parameters
----------
document_path:
Absolute path to the document file to parse.
mime_type:
Detected MIME type of the document.
produce_archive:
When True (the default) and can_produce_archive is also True,
the parser should produce a searchable PDF at the path returned
by get_archive_path. Pass False when only text extraction and
thumbnail generation are required and disk I/O should be minimised.
Raises
------
documents.parsers.ParseError
If parsing fails for any reason.
"""
...
# ------------------------------------------------------------------
# Result accessors
# ------------------------------------------------------------------
def get_text(self) -> str | None:
"""Return the plain-text content extracted during parse.
Returns
-------
str | None
Extracted text, or None if no text could be found.
"""
...
def get_date(self) -> datetime.datetime | None:
"""Return the document date detected during parse.
Returns
-------
datetime.datetime | None
Detected document date, or None if no date was found.
"""
...
def get_archive_path(self) -> Path | None:
"""Return the path to the generated archive PDF, or None.
Returns
-------
Path | None
Path to the searchable PDF archive, or None if no archive was
produced (e.g. because produce_archive=False or the parser does
not support archive generation).
"""
...
# ------------------------------------------------------------------
# Thumbnail and metadata
# ------------------------------------------------------------------
def get_thumbnail(self, document_path: Path, mime_type: str) -> Path:
"""Generate and return the path to a thumbnail image for the document.
May be called independently of parse. The returned path must point to
an existing WebP image file inside the parser's temporary working
directory.
Parameters
----------
document_path:
Absolute path to the source document.
mime_type:
Detected MIME type of the document.
Returns
-------
Path
Path to the generated thumbnail image (WebP format preferred).
"""
...
def get_page_count(
self,
document_path: Path,
mime_type: str,
) -> int | None:
"""Return the number of pages in the document, if determinable.
Parameters
----------
document_path:
Absolute path to the source document.
mime_type:
Detected MIME type of the document.
Returns
-------
int | None
Page count, or None if the parser cannot determine it.
"""
...
def extract_metadata(
self,
document_path: Path,
mime_type: str,
) -> list[MetadataEntry]:
"""Extract format-specific metadata from the document.
Called by the API view layer on demand — not during the consumption
pipeline. Results are returned to the frontend for per-file display.
For documents with an archive version, this method is called twice:
once for the original file (with its native MIME type) and once for
the archive file (with ``"application/pdf"``). Parsers that produce
archives should handle both cases.
Implementations must not raise. A failure to read metadata is not
fatal — log a warning and return whatever partial results were
collected, or ``[]`` if none.
Parameters
----------
document_path:
Absolute path to the file to extract metadata from.
mime_type:
MIME type of the file at ``document_path``. May be
``"application/pdf"`` when called for the archive version.
Returns
-------
list[MetadataEntry]
Zero or more metadata entries. Returns ``[]`` if no metadata
could be extracted or the format does not support it.
"""
...
# ------------------------------------------------------------------
# Context manager
# ------------------------------------------------------------------
def __enter__(self) -> Self:
"""Enter the parser context, returning the parser instance.
Implementations should perform any resource allocation here if not
done in __init__ (e.g. creating API clients or temp directories).
Returns
-------
Self
The parser instance itself.
"""
...
def __exit__(
self,
exc_type: type[BaseException] | None,
exc_val: BaseException | None,
exc_tb: TracebackType | None,
) -> None:
"""Exit the parser context and release all resources.
Implementations must clean up all temporary files and other resources
regardless of whether an exception occurred.
Parameters
----------
exc_type:
The exception class, or None if no exception was raised.
exc_val:
The exception instance, or None.
exc_tb:
The traceback, or None.
"""
...


@@ -0,0 +1,366 @@
"""
Singleton registry that tracks all document parsers available to
Paperless-ngx — both built-ins shipped with the application and third-party
plugins installed via Python entrypoints.
Public surface
--------------
get_parser_registry
Lazy-initialise and return the shared ParserRegistry. This is the primary
entry point for production code.
init_builtin_parsers
Register built-in parsers only, without entrypoint discovery. Safe to
call from Celery worker_process_init where importing all entrypoints
would be wasteful or cause side effects.
reset_parser_registry
Reset module-level state. For tests only.
Entrypoint group
----------------
Third-party parsers must advertise themselves under the
"paperless_ngx.parsers" entrypoint group in their pyproject.toml::
[project.entry-points."paperless_ngx.parsers"]
my_parser = "my_package.parsers:MyParser"
The loaded class must expose the following attributes at the class level
(not just on instances) for the registry to accept it:
name, version, author, url, supported_mime_types (callable), score (callable).
"""
from __future__ import annotations
import logging
from importlib.metadata import entry_points
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from pathlib import Path
from paperless.parsers import ParserProtocol
logger = logging.getLogger("paperless.parsers.registry")
# ---------------------------------------------------------------------------
# Module-level singleton state
# ---------------------------------------------------------------------------
_registry: ParserRegistry | None = None
_discovery_complete: bool = False
# Attribute names that every registered external parser class must expose.
_REQUIRED_ATTRS: tuple[str, ...] = (
"name",
"version",
"author",
"url",
"supported_mime_types",
"score",
)
# ---------------------------------------------------------------------------
# Module-level accessor functions
# ---------------------------------------------------------------------------
def get_parser_registry() -> ParserRegistry:
"""Return the shared ParserRegistry instance.
On the first call this function:
1. Creates a new ParserRegistry.
2. Calls register_defaults to install built-in parsers.
3. Calls discover to load third-party plugins via importlib.metadata entrypoints.
4. Calls log_summary to emit a startup summary.
Subsequent calls return the same instance immediately.
Returns
-------
ParserRegistry
The shared registry singleton.
"""
global _registry, _discovery_complete
if _registry is None:
_registry = ParserRegistry()
_registry.register_defaults()
if not _discovery_complete:
_registry.discover()
_registry.log_summary()
_discovery_complete = True
return _registry
def init_builtin_parsers() -> None:
"""Register built-in parsers without performing entrypoint discovery.
Intended for use in Celery worker_process_init handlers where importing
all installed entrypoints would be wasteful or slow, or could produce
undesirable side effects. Entrypoint discovery (third-party plugins) is
deliberately not performed.
Safe to call multiple times — subsequent calls are no-ops.
Returns
-------
None
"""
global _registry
if _registry is None:
_registry = ParserRegistry()
_registry.register_defaults()
def reset_parser_registry() -> None:
"""Reset the module-level registry state to its initial values.
Resets _registry and _discovery_complete so the next call to
get_parser_registry will re-initialise everything from scratch.
FOR TESTS ONLY. Do not call this in production code — resetting the
registry mid-request causes all subsequent parser lookups to go through
discovery again, which is expensive and may have unexpected side effects
in multi-threaded environments.
Returns
-------
None
"""
global _registry, _discovery_complete
_registry = None
_discovery_complete = False
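# Editorial sketch: a test suite could pair reset_parser_registry with a
# fixture along these lines. The fixture name is hypothetical and it would
# live in a conftest.py, not in this module.
import pytest

from paperless.parsers.registry import get_parser_registry
from paperless.parsers.registry import reset_parser_registry


@pytest.fixture
def clean_parser_registry():
    reset_parser_registry()  # start from pristine module state
    yield get_parser_registry()  # defaults registered, discovery performed
    reset_parser_registry()  # do not leak registry state into other tests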
# ---------------------------------------------------------------------------
# Registry class
# ---------------------------------------------------------------------------
class ParserRegistry:
"""Registry that maps MIME types to the best available parser class.
Parsers are partitioned into two lists:
_builtins
Parser classes registered via register_builtin (populated by
register_defaults in Phase 3+).
_external
Parser classes loaded from installed Python entrypoints via discover.
When resolving a parser for a file, external parsers are evaluated
alongside built-in parsers using a uniform scoring mechanism. Both lists
are iterated together; the class with the highest score wins. If an
external parser wins, its attribution details are logged so users can
identify which third-party package handled their document.
"""
def __init__(self) -> None:
self._external: list[type[ParserProtocol]] = []
self._builtins: list[type[ParserProtocol]] = []
# ------------------------------------------------------------------
# Registration
# ------------------------------------------------------------------
def register_builtin(self, parser_class: type[ParserProtocol]) -> None:
"""Register a built-in parser class.
Built-in parsers are shipped with Paperless-ngx and are appended to
the _builtins list. They are never overridden by external parsers;
instead, scoring determines which parser wins for any given file.
Parameters
----------
parser_class:
The parser class to register. Must satisfy ParserProtocol.
"""
self._builtins.append(parser_class)
def register_defaults(self) -> None:
"""Register the built-in parsers that ship with Paperless-ngx.
Each parser that has been migrated to the new ParserProtocol interface
is registered here. Parsers are added in ascending weight order so
that log output is predictable; scoring determines which parser wins
at runtime regardless of registration order.
"""
from paperless.parsers.remote import RemoteDocumentParser
from paperless.parsers.text import TextDocumentParser
self.register_builtin(TextDocumentParser)
self.register_builtin(RemoteDocumentParser)
# ------------------------------------------------------------------
# Discovery
# ------------------------------------------------------------------
def discover(self) -> None:
"""Load third-party parsers from the "paperless_ngx.parsers" entrypoint group.
For each advertised entrypoint the method:
1. Calls ep.load() to import the class.
2. Validates that the class exposes all required attributes.
3. On success, appends the class to _external and logs an info message.
4. On failure (import error or missing attributes), logs an appropriate
warning/error and continues to the next entrypoint.
Errors during discovery of a single parser do not prevent other parsers
from being loaded.
Returns
-------
None
"""
eps = entry_points(group="paperless_ngx.parsers")
for ep in eps:
try:
parser_class = ep.load()
except Exception:
logger.exception(
"Failed to load parser entrypoint '%s' — skipping.",
ep.name,
)
continue
missing = [
attr for attr in _REQUIRED_ATTRS if not hasattr(parser_class, attr)
]
if missing:
logger.warning(
"Parser loaded from entrypoint '%s' is missing required "
"attributes %r — skipping.",
ep.name,
missing,
)
continue
self._external.append(parser_class)
logger.info(
"Loaded third-party parser '%s' v%s by %s (entrypoint: '%s').",
parser_class.name,
parser_class.version,
parser_class.author,
ep.name,
)
# ------------------------------------------------------------------
# Summary logging
# ------------------------------------------------------------------
def log_summary(self) -> None:
"""Log a startup summary of all registered parsers.
Built-in parsers are listed first, followed by any external parsers
discovered from entrypoints. If no external parsers were found a
short informational message is logged instead of an empty list.
Returns
-------
None
"""
logger.info(
"Built-in parsers (%d):",
len(self._builtins),
)
for cls in self._builtins:
logger.info(
" [built-in] %s v%s%s",
getattr(cls, "name", repr(cls)),
getattr(cls, "version", "unknown"),
getattr(cls, "url", "built-in"),
)
if not self._external:
logger.info("No third-party parsers discovered.")
return
logger.info(
"Third-party parsers (%d):",
len(self._external),
)
for cls in self._external:
logger.info(
" [external] %s v%s by %s — report issues at %s",
getattr(cls, "name", repr(cls)),
getattr(cls, "version", "unknown"),
getattr(cls, "author", "unknown"),
getattr(cls, "url", "unknown"),
)
# ------------------------------------------------------------------
# Parser resolution
# ------------------------------------------------------------------
def get_parser_for_file(
self,
mime_type: str,
filename: str,
path: Path | None = None,
) -> type[ParserProtocol] | None:
"""Return the best parser class for the given file, or None.
All registered parsers (external first, then built-ins) are evaluated
against the file. A parser is eligible if mime_type appears in the dict
returned by its supported_mime_types classmethod, and its score
classmethod returns a non-None integer.
The parser with the highest score wins. When two parsers return the
same score, the one that appears earlier in the evaluation order wins
(external parsers are evaluated before built-ins, giving third-party
packages a chance to override defaults at equal priority).
When an external parser is selected, its identity is logged at INFO
level so operators can trace which package handled a document.
Parameters
----------
mime_type:
The detected MIME type of the file.
filename:
The original filename, including extension.
path:
Optional filesystem path to the file. Forwarded to each
parser's score method.
Returns
-------
type[ParserProtocol] | None
The winning parser class, or None if no parser can handle the file.
"""
best_score: int | None = None
best_parser: type[ParserProtocol] | None = None
# External parsers are placed first so that, at equal scores, an
# external parser wins over a built-in (first-seen policy).
for parser_class in (*self._external, *self._builtins):
if mime_type not in parser_class.supported_mime_types():
continue
score = parser_class.score(mime_type, filename, path)
if score is None:
continue
if best_score is None or score > best_score:
best_score = score
best_parser = parser_class
if best_parser is not None and best_parser in self._external:
logger.info(
"Document handled by third-party parser '%s' v%s%s",
getattr(best_parser, "name", repr(best_parser)),
getattr(best_parser, "version", "unknown"),
getattr(best_parser, "url", "unknown"),
)
return best_parser
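# Editorial sketch: end-to-end resolution through the registry, roughly as
# a consumer would drive it. File name and path are made up; parser classes
# are assumed to accept no required constructor arguments, as the built-ins do.
from pathlib import Path

from paperless.parsers.registry import get_parser_registry

path = Path("/tmp/invoice.pdf")
parser_class = get_parser_registry().get_parser_for_file(
    mime_type="application/pdf",
    filename="invoice.pdf",
    path=path,
)
if parser_class is not None:
    with parser_class() as parser:  # __enter__/__exit__ manage temp files
        parser.parse(path, "application/pdf")
        text = parser.get_text()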


@@ -0,0 +1,429 @@
"""
Built-in remote-OCR document parser.
Handles documents by sending them to a configured remote OCR engine
(currently Azure AI Vision / Document Intelligence) and retrieving both
the extracted text and a searchable PDF with an embedded text layer.
When no engine is configured, ``score()`` returns ``None`` so the parser
is effectively invisible to the registry — the tesseract parser handles
these MIME types instead.
"""
from __future__ import annotations
import logging
import shutil
import tempfile
from pathlib import Path
from typing import TYPE_CHECKING
from typing import Self
from django.conf import settings
from paperless.version import __full_version_str__
if TYPE_CHECKING:
import datetime
from types import TracebackType
from paperless.parsers import MetadataEntry
logger = logging.getLogger("paperless.parsing.remote")
_SUPPORTED_MIME_TYPES: dict[str, str] = {
"application/pdf": ".pdf",
"image/png": ".png",
"image/jpeg": ".jpg",
"image/tiff": ".tiff",
"image/bmp": ".bmp",
"image/gif": ".gif",
"image/webp": ".webp",
}
class RemoteEngineConfig:
"""Holds and validates the remote OCR engine configuration."""
def __init__(
self,
engine: str | None,
api_key: str | None = None,
endpoint: str | None = None,
) -> None:
self.engine = engine
self.api_key = api_key
self.endpoint = endpoint
def engine_is_valid(self) -> bool:
"""Return True when the engine is known and fully configured."""
return (
self.engine in ("azureai",)
and self.api_key is not None
and not (self.engine == "azureai" and self.endpoint is None)
)
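# Editorial sketch: the only shape engine_is_valid() accepts today is
# "azureai" with both credentials present.
assert RemoteEngineConfig("azureai", api_key="k", endpoint="https://x").engine_is_valid()
assert not RemoteEngineConfig("azureai", api_key="k").engine_is_valid()  # endpoint missing
assert not RemoteEngineConfig(None, api_key="k", endpoint="https://x").engine_is_valid()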
class RemoteDocumentParser:
"""Parse documents via a remote OCR API (currently Azure AI Vision).
This parser sends documents to a remote engine that returns both
extracted text and a searchable PDF with an embedded text layer.
It does not depend on Tesseract or ocrmypdf.
Class attributes
----------------
name : str
Human-readable parser name.
version : str
Semantic version string, kept in sync with Paperless-ngx releases.
author : str
Maintainer name.
url : str
Issue tracker / source URL.
"""
name: str = "Paperless-ngx Remote OCR Parser"
version: str = __full_version_str__
author: str = "Paperless-ngx Contributors"
url: str = "https://github.com/paperless-ngx/paperless-ngx"
# ------------------------------------------------------------------
# Class methods
# ------------------------------------------------------------------
@classmethod
def supported_mime_types(cls) -> dict[str, str]:
"""Return the MIME types this parser can handle.
The full set is always returned regardless of whether a remote
engine is configured. The ``score()`` method handles the
"am I active?" logic by returning ``None`` when not configured.
Returns
-------
dict[str, str]
Mapping of MIME type to preferred file extension.
"""
return _SUPPORTED_MIME_TYPES
@classmethod
def score(
cls,
mime_type: str,
filename: str,
path: Path | None = None,
) -> int | None:
"""Return the priority score for handling this file, or None.
Returns ``None`` when no valid remote engine is configured,
making the parser invisible to the registry for this file.
When configured, returns 20 — higher than the Tesseract parser's
default of 10 — so the remote engine takes priority.
Parameters
----------
mime_type:
Detected MIME type of the file.
filename:
Original filename including extension.
path:
Optional filesystem path. Not inspected by this parser.
Returns
-------
int | None
20 when the remote engine is configured and the MIME type is
supported, otherwise None.
"""
config = RemoteEngineConfig(
engine=settings.REMOTE_OCR_ENGINE,
api_key=settings.REMOTE_OCR_API_KEY,
endpoint=settings.REMOTE_OCR_ENDPOINT,
)
if not config.engine_is_valid():
return None
if mime_type not in _SUPPORTED_MIME_TYPES:
return None
return 20
# ------------------------------------------------------------------
# Properties
# ------------------------------------------------------------------
@property
def can_produce_archive(self) -> bool:
"""Whether this parser can produce a searchable PDF archive copy.
Returns
-------
bool
Always True — the remote engine always returns a PDF with an
embedded text layer that serves as the archive copy.
"""
return True
@property
def requires_pdf_rendition(self) -> bool:
"""Whether the parser must produce a PDF for the frontend to display.
Returns
-------
bool
Always False — all supported originals are displayable by
the browser (PDF) or handled via the archive copy (images).
"""
return False
# ------------------------------------------------------------------
# Lifecycle
# ------------------------------------------------------------------
def __init__(self, logging_group: object = None) -> None:
settings.SCRATCH_DIR.mkdir(parents=True, exist_ok=True)
self._tempdir = Path(
tempfile.mkdtemp(prefix="paperless-", dir=settings.SCRATCH_DIR),
)
self._logging_group = logging_group
self._text: str | None = None
self._archive_path: Path | None = None
def __enter__(self) -> Self:
return self
def __exit__(
self,
exc_type: type[BaseException] | None,
exc_val: BaseException | None,
exc_tb: TracebackType | None,
) -> None:
logger.debug("Cleaning up temporary directory %s", self._tempdir)
shutil.rmtree(self._tempdir, ignore_errors=True)
# ------------------------------------------------------------------
# Core parsing interface
# ------------------------------------------------------------------
def parse(
self,
document_path: Path,
mime_type: str,
*,
produce_archive: bool = True,
) -> None:
"""Send the document to the remote engine and store results.
Parameters
----------
document_path:
Absolute path to the document file to parse.
mime_type:
Detected MIME type of the document.
produce_archive:
Ignored — the remote engine always returns a searchable PDF,
which is stored as the archive copy regardless of this flag.
"""
config = RemoteEngineConfig(
engine=settings.REMOTE_OCR_ENGINE,
api_key=settings.REMOTE_OCR_API_KEY,
endpoint=settings.REMOTE_OCR_ENDPOINT,
)
if not config.engine_is_valid():
logger.warning(
"No valid remote parser engine is configured, content will be empty.",
)
self._text = ""
return
if config.engine == "azureai":
self._text = self._azure_ai_vision_parse(document_path, config)
# ------------------------------------------------------------------
# Result accessors
# ------------------------------------------------------------------
def get_text(self) -> str | None:
"""Return the plain-text content extracted during parse."""
return self._text
def get_date(self) -> datetime.datetime | None:
"""Return the document date detected during parse.
Returns
-------
datetime.datetime | None
Always None — the remote parser does not detect dates.
"""
return None
def get_archive_path(self) -> Path | None:
"""Return the path to the generated archive PDF, or None."""
return self._archive_path
# ------------------------------------------------------------------
# Thumbnail and metadata
# ------------------------------------------------------------------
def get_thumbnail(self, document_path: Path, mime_type: str) -> Path:
"""Generate a thumbnail image for the document.
Uses the archive PDF produced by the remote engine when available,
otherwise falls back to the original document path (PDF inputs).
Parameters
----------
document_path:
Absolute path to the source document.
mime_type:
Detected MIME type of the document.
Returns
-------
Path
Path to the generated WebP thumbnail inside the temp directory.
"""
# make_thumbnail_from_pdf lives in documents.parsers for now;
# it will move to paperless.parsers.utils when the tesseract
# parser is migrated in a later phase.
from documents.parsers import make_thumbnail_from_pdf
return make_thumbnail_from_pdf(
self._archive_path or document_path,
self._tempdir,
self._logging_group,
)
def get_page_count(
self,
document_path: Path,
mime_type: str,
) -> int | None:
"""Return the number of pages in a PDF document.
Parameters
----------
document_path:
Absolute path to the source document.
mime_type:
Detected MIME type of the document.
Returns
-------
int | None
Page count for PDF inputs, or ``None`` for other MIME types.
"""
if mime_type != "application/pdf":
return None
from paperless.parsers.utils import get_page_count_for_pdf
return get_page_count_for_pdf(document_path, log=logger)
def extract_metadata(
self,
document_path: Path,
mime_type: str,
) -> list[MetadataEntry]:
"""Extract format-specific metadata from the document.
Delegates to the shared pikepdf-based extractor for PDF files.
Returns ``[]`` for all other MIME types.
Parameters
----------
document_path:
Absolute path to the file to extract metadata from.
mime_type:
MIME type of the file. May be ``"application/pdf"`` when
called for the archive version of an image original.
Returns
-------
list[MetadataEntry]
Zero or more metadata entries.
"""
if mime_type != "application/pdf":
return []
from paperless.parsers.utils import extract_pdf_metadata
return extract_pdf_metadata(document_path, log=logger)
# ------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------
def _azure_ai_vision_parse(
self,
file: Path,
config: RemoteEngineConfig,
) -> str | None:
"""Send ``file`` to Azure AI Document Intelligence and return text.
Downloads the searchable PDF output from Azure and stores it at
``self._archive_path``. Returns the extracted text content, or
``None`` on failure (the error is logged).
Parameters
----------
file:
Absolute path to the document to analyse.
config:
Validated remote engine configuration.
Returns
-------
str | None
Extracted text, or None if the Azure call failed.
"""
if TYPE_CHECKING:
# Callers must have already validated config via engine_is_valid():
# engine_is_valid() guarantees api_key is not None and (for azureai)
# endpoint is not None, so these narrowings are provably safe.
assert config.endpoint is not None
assert config.api_key is not None
from azure.ai.documentintelligence import DocumentIntelligenceClient
from azure.ai.documentintelligence.models import AnalyzeDocumentRequest
from azure.ai.documentintelligence.models import AnalyzeOutputOption
from azure.ai.documentintelligence.models import DocumentContentFormat
from azure.core.credentials import AzureKeyCredential
client = DocumentIntelligenceClient(
endpoint=config.endpoint,
credential=AzureKeyCredential(config.api_key),
)
try:
with file.open("rb") as f:
analyze_request = AnalyzeDocumentRequest(bytes_source=f.read())
poller = client.begin_analyze_document(
model_id="prebuilt-read",
body=analyze_request,
output_content_format=DocumentContentFormat.TEXT,
output=[AnalyzeOutputOption.PDF],
content_type="application/json",
)
poller.wait()
result_id = poller.details["operation_id"]
result = poller.result()
self._archive_path = self._tempdir / "archive.pdf"
with self._archive_path.open("wb") as f:
for chunk in client.get_analyze_result_pdf(
model_id="prebuilt-read",
result_id=result_id,
):
f.write(chunk)
return result.content
except Exception as e:
logger.error("Azure AI Vision parsing failed: %s", e)
finally:
client.close()
return None
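# Editorial sketch: driving the parser through its context-manager
# lifecycle; the input path is made up.
from pathlib import Path

doc = Path("/tmp/scan-001.pdf")
with RemoteDocumentParser(logging_group=None) as parser:
    parser.parse(doc, "application/pdf")
    text = parser.get_text()  # extracted text, or None on failure
    archive = parser.get_archive_path()  # searchable PDF from the engine
    thumbnail = parser.get_thumbnail(doc, "application/pdf")
# __exit__ has removed the scratch directory at this point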

Some files were not shown because too many files have changed in this diff.