Compare commits

...

111 Commits

Author SHA1 Message Date
Trenton H
3d0d243057 Merge branch 'dev' into chore/plugin-documentation 2026-03-27 10:18:22 -07:00
Trenton H
9383471fa0 Feature: Transition all checksums to use SHA256 (#12432) 2026-03-26 11:28:02 -07:00
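For reference, computing a SHA256 checksum of a file with Python's standard library — a generic illustration of the technique the commit title names, not code taken from the PR:

```python
import hashlib
from pathlib import Path


def sha256_checksum(path: Path) -> str:
    """Stream a file through SHA256 and return the hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        # Read in chunks so large documents are never fully in memory.
        for chunk in iter(lambda: handle.read(64 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()
```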
dependabot[bot]
0060b46c8b Chore(deps): Bump requests in the uv group across 1 directory (#12441)
Bumps the uv group with 1 update in the / directory: [requests](https://github.com/psf/requests).


Updates `requests` from 2.32.5 to 2.33.0
- [Release notes](https://github.com/psf/requests/releases)
- [Changelog](https://github.com/psf/requests/blob/main/HISTORY.md)
- [Commits](https://github.com/psf/requests/compare/v2.32.5...v2.33.0)

---
updated-dependencies:
- dependency-name: requests
  dependency-version: 2.33.0
  dependency-type: indirect
  dependency-group: uv
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-26 09:04:20 -07:00
GitHub Actions
b153ec803b Auto translate strings 2026-03-26 14:38:10 +00:00
shamoon
38dba60ceb Enhancement: auto-hide the search bar on mobile (#12404) 2026-03-26 07:36:32 -07:00
shamoon
ae0474450f Chore: logger, response and template sanitization cleanup (#12439) 2026-03-26 07:36:02 -07:00
Trenton H
8efb01010c fix: Don't silently drop the change_groups and switch to a couple of slightly more efficient implementations (#12431) 2026-03-26 14:15:42 +00:00
Trenton H
d18bbfa9c3 Chore: Instead of manual temporary directory management, use a context manager (#12430) 2026-03-26 14:05:58 +00:00
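A minimal sketch of the pattern this commit describes: replacing manual temporary-directory bookkeeping with `tempfile.TemporaryDirectory`, which cleans up on exit even when an exception is raised. The scratch file name is illustrative, not taken from the paperless-ngx code:

```python
import tempfile
from pathlib import Path

# Before: tempfile.mkdtemp() plus a try/finally calling shutil.rmtree().
# After: the context manager guarantees cleanup, even on exceptions.
with tempfile.TemporaryDirectory() as tmp_dir:
    scratch = Path(tmp_dir) / "page-0001.png"  # hypothetical scratch file
    scratch.write_bytes(b"\x89PNG...")
# tmp_dir and everything inside it is removed here.
```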
GitHub Actions
ec76d3c762 Auto translate strings 2026-03-26 12:45:29 +00:00
shamoon
bdc0a58242 Security: prevent prototype pollution in settings and list view (#12438) 2026-03-26 05:43:49 -07:00
dependabot[bot]
b049ad9626 Chore(deps): Bump cbor2 in the uv group across 1 directory (#12424)
Bumps the uv group with 1 update in the / directory: [cbor2](https://github.com/agronholm/cbor2).


Updates `cbor2` from 5.8.0 to 5.9.0
- [Release notes](https://github.com/agronholm/cbor2/releases)
- [Commits](https://github.com/agronholm/cbor2/compare/5.8.0...5.9.0)

---
updated-dependencies:
- dependency-name: cbor2
  dependency-version: 5.9.0
  dependency-type: indirect
  dependency-group: uv
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-24 09:26:24 -07:00
Trenton H
7a192d021f Moves the date parsing plugin section under the extending section 2026-03-23 12:56:43 -07:00
Trenton H
1e30490a46 Adds a section about how the 2 install types can add external plugins 2026-03-23 12:56:38 -07:00
Trenton H
bd9e529a63 Initial documentation updates for developing a plugin 2026-03-23 12:56:34 -07:00
GitHub Actions
79def8a200 Auto translate strings 2026-03-22 13:55:02 +00:00
Trenton H
701735f6e5 Chore: Drop old signal and unneeded apps, transition to parser registry instead (#12405)
* refactor: switch consumer and callers to ParserRegistry (Phase 4)

Replace all Django signal-based parser discovery with direct registry
calls. Removes `_parser_cleanup`, `parser_is_new_style` shims, and all
old-style isinstance checks. All parser instantiation now uses the
`with parser_class() as parser:` context manager pattern.

- documents/parsers.py: delegate to get_parser_registry(); drop lru_cache
- documents/consumer.py: use registry + context manager; remove shims
- documents/tasks.py: same pattern
- documents/management/commands/document_thumbnails.py: same pattern
- documents/views.py: get_metadata uses context manager
- documents/checks.py: use get_parser_registry().all_parsers()
- paperless/parsers/registry.py: add all_parsers() public method
- tests: update mocks to target documents.consumer.get_parser_class_for_mime_type

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* refactor: drop get_parser_class_for_mime_type; callers use registry directly

All callers now call get_parser_registry().get_parser_for_file() with
the actual filename and path, enabling score() to use file extension
hints. The MIME-only helper is removed.

- consumer.py: passes self.filename + self.working_copy
- tasks.py: passes document.original_filename + document.source_path
- document_thumbnails.py: same pattern
- views.py: passes Path(file).name + Path(file)
- parsers.py: internal helpers inline the registry call with filename=""
- test_parsers.py: drop TestParserDiscovery (was testing mock behavior);
  TestParserAvailability uses registry directly
- test_consumer.py: mocks switch to documents.consumer.get_parser_registry

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* refactor: remove document_consumer_declaration signal infrastructure

Remove the document_consumer_declaration signal that was previously used
for parser registration. Each parser app no longer connects to this signal,
and the signal declaration itself has been removed from documents/signals.

Changes:
- Remove document_consumer_declaration from documents/signals/__init__.py
- Remove ready() methods and signal imports from all parser app configs
- Delete signal shim files (signals.py) from all parser apps:
  - paperless_tesseract/signals.py
  - paperless_text/signals.py
  - paperless_tika/signals.py
  - paperless_mail/signals.py
  - paperless_remote/signals.py

Parser discovery now happens exclusively through the ParserRegistry
system introduced in the previous refactor phases.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* refactor: remove empty paperless_text and paperless_tika Django apps

After parser classes were moved to paperless/parsers/ in the plugin
refactor, these Django apps contained only empty AppConfig classes
with no models, views, tasks, migrations, or other functionality.

- Remove paperless_text and paperless_tika from INSTALLED_APPS
- Delete empty app directories entirely
- Update pyproject.toml test exclusions
- Clean stale mypy baseline entries for moved parser files

paperless_remote app is retained as it contains meaningful system
checks for Azure AI configuration.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Moves the checks and tests to the main application and removes the old applications

* Adds a comment to satisfy Sonar

* refactor: remove automatic log_summary() call from get_parser_registry()

The summary was logged once per process, causing it to appear repeatedly
during Docker startup (management commands, web server, each Celery
worker subprocess). External parsers are already announced individually
at INFO when discovered; the full summary is redundant noise.
log_summary() is retained on ParserRegistry for manual/debug use.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Cleans up the duplicate test file/fixture

* Fixes a race condition in which webserver threads competed to populate the registry

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 06:53:32 -07:00
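A rough sketch of the dispatch pattern this commit message describes, using the names it mentions (`get_parser_registry`, `get_parser_for_file`, and the `with parser_class() as parser:` lifecycle). Argument names, and whether the lookup returns a class rather than an instance, are assumptions, not copied from the codebase:

```python
from pathlib import Path

from paperless.parsers.registry import get_parser_registry  # module path per the commit


def extract_text(path: Path, mime_type: str) -> str:
    # The registry scores candidates using the filename/extension hints
    # the commit message mentions; keyword names here are assumptions.
    parser_class = get_parser_registry().get_parser_for_file(
        filename=path.name,
        path=path,
    )
    if parser_class is None:
        raise ValueError(f"no parser handles {mime_type}")

    # New-style lifecycle from the commit: __enter__ acquires resources,
    # __exit__ releases them (replacing the old cleanup() shims).
    with parser_class() as parser:
        parser.parse(path, mime_type)
        return parser.get_text()
```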
GitHub Actions
07f54bfdab Auto translate strings 2026-03-21 09:26:23 +00:00
shamoon
0f84af27d0 Merge branch 'main' into dev
# Conflicts:
#	docs/setup.md
#	src-ui/src/main.ts
#	src/documents/tests/test_api_bulk_edit.py
#	src/documents/tests/test_api_custom_fields.py
#	src/documents/tests/test_api_search.py
#	src/documents/tests/test_api_status.py
#	src/documents/tests/test_workflows.py
#	src/paperless_mail/tests/test_api.py
2026-03-21 02:12:19 -07:00
shamoon
9646b8c67d Bump version to 2.20.13 2026-03-21 01:50:04 -07:00
shamoon
e590d7df69 Merge branch 'release/v2.20.x' 2026-03-21 01:49:32 -07:00
shamoon
cc71aad058 Fix: suggest corrections only if visible results 2026-03-21 01:24:23 -07:00
shamoon
3cbdf5d0b7 Fix: require view permission for more-like search 2026-03-21 01:20:59 -07:00
shamoon
f84e0097e5 Fix validate document link targets 2026-03-21 00:55:36 -07:00
shamoon
7dbf8bdd4a Fix: enforce permissions when attaching accounts to mail rules 2026-03-21 00:44:28 -07:00
github-actions[bot]
d2a752a196 Documentation: Add v2.20.12 changelog (#12407)
* Changelog v2.20.12 - GHA

* Update changelog for version 2.20.12

Added security advisory for GHSA-96jx-fj7m-qh6x and fixed workflow saves and usermod/groupmod issues.

---------

Co-authored-by: github-actions <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: shamoon <4887959+shamoon@users.noreply.github.com>
2026-03-20 19:23:06 -07:00
shamoon
2cb155e717 Bump version to 2.20.12 2026-03-20 15:47:37 -07:00
shamoon
9e9fc6213c Resolve GHSA-96jx-fj7m-qh6x 2026-03-20 15:39:15 -07:00
Trenton H
a9756f9462 Chore: Convert Tesseract parser to plugin style (#12403)
* Move tesseract parser, tests, and samples to paperless.parsers

Relocates files in preparation for the Phase 3 Protocol-based parser
refactor, preserving full git history via rename.

- src/paperless_tesseract/parsers.py -> src/paperless/parsers/tesseract.py
- src/paperless_tesseract/tests/test_parser.py -> src/paperless/tests/parsers/test_tesseract_parser.py
- src/paperless_tesseract/tests/test_parser_custom_settings.py -> src/paperless/tests/parsers/test_tesseract_custom_settings.py
- src/paperless_tesseract/tests/samples/* -> src/paperless/tests/samples/tesseract/
- Moves RUF001 suppression from broad per-file pyproject.toml ignore to inline noqa comments on the two affected lines

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Refactor RasterisedDocumentParser to ParserProtocol interface

- Add RasterisedDocumentParser to registry.register_defaults()
- Update parser class: remove DocumentParser inheritance, add Protocol
  class attrs/classmethods/properties, context-manager lifecycle
- Add read_file_handle_unicode_errors() to shared parsers/utils.py
- Replace inline unicode-error-handling with shared utility call

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Update tesseract signals.py to import from new parser location

RasterisedDocumentParser moved to paperless.parsers.tesseract; update
the lazy import in signals.get_parser so the signal-based consumer
declaration continues to work during the registry transition. Pop
logging_group and progress_callback kwargs for constructor compatibility.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* tests: rewrite test_tesseract_parser to pytest style with typed fixtures

- Converts all tests from Django TestCase to pytest-style classes
- Adds tesseract_samples_dir, null_app_config, tesseract_parser, and
  make_tesseract_parser fixtures in conftest.py; all DB-free except
  TestOcrmypdfParameters which uses @pytest.mark.django_db
- Defines MakeTesseractParser type alias in conftest.py for autocomplete
- Fixes FBT001 (boolean positional args) by making bool params
  keyword-only with * separator in parametrize test signatures
- Adds type annotations to all fixture parameters for IDE support
- Uses pytest.param(..., id="...") throughout; pytest-mock for patching

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(types): fully annotate paperless/parsers/tesseract.py

Fixes all mypy and pyrefly errors in the new parser file:

- Add missing type annotations to is_image, has_alpha, get_dpi,
  calculate_a4_dpi, construct_ocrmypdf_parameters, post_process_text
- Narrow Path-only (no str) for image helper args; convert to str when
  building list[str] args for run_subprocess
- Annotate ocrmypdf_args as dict[str, Any] so operator expressions on
  its values type-check and ocrmypdf.ocr(**args) resolves cleanly
- Declare text: str | None = None at top of extract_text to unify
  all assignments to the same type across both branches
- Import Any from typing

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Fixes isort

* fix: add RasterisedDocumentParser to new-style parser shim checks

The new RasterisedDocumentParser uses __enter__/__exit__ for resource
management instead of cleanup(). Update all existing new-style shims to
include it in the isinstance checks:

- documents/consumer.py: _parser_cleanup(), parser_is_new_style
- documents/tasks.py: parser_is_new_style, finally cleanup branch
  (also adds RemoteDocumentParser which was missing from the latter)
- documents/management/commands/document_thumbnails.py: adds new-style
  handling from scratch (enter/exit + 2-arg get_thumbnail signature)

Fix stale import paths in three test files that were still importing
from paperless_tesseract.parsers instead of paperless.parsers.tesseract.

Fix two registry tests that used application/pdf as a proxy for "no
handler" — now that RasterisedDocumentParser is registered, PDF always
has a handler, so switch to a truly unsupported MIME type.

Signal infrastructure and shims remain intact; this is plumbing only.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* One missed import (cherry pick?)

* Adds a no cover for a special case of handling unicode errors in PDF metadata

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-20 12:46:07 -07:00
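Pieced together from this and the neighbouring plugin commits, a hypothetical skeleton of a ParserProtocol-compliant parser. The attribute and method names come from the commit messages; the bodies are placeholders, not the real implementation:

```python
from pathlib import Path
from typing import ClassVar


class SketchParser:
    """Hypothetical ParserProtocol-shaped parser; bodies are placeholders."""

    # Class-level metadata attributes named in the mail/tesseract commits.
    name: ClassVar[str] = "sketch"
    version: ClassVar[str] = "1.0.0"
    author: ClassVar[str] = "example"
    url: ClassVar[str] = "https://example.invalid"

    can_produce_archive: ClassVar[bool] = False
    requires_pdf_rendition: ClassVar[bool] = False

    @classmethod
    def supported_mime_types(cls) -> set[str]:
        return {"text/plain"}

    @classmethod
    def score(cls, mime_type: str, filename: str = "") -> int | None:
        # None makes the parser invisible to the registry; higher scores
        # win (the tesseract default is 10, per the remote-parser commit).
        return 10 if mime_type in cls.supported_mime_types() else None

    def __enter__(self) -> "SketchParser":
        self._text: str | None = None  # acquire resources here
        return self

    def __exit__(self, *exc_info: object) -> None:
        """Release resources; replaces the legacy cleanup()."""

    def parse(self, document_path: Path, mime_type: str) -> None:
        self._text = document_path.read_text(errors="replace")

    def get_text(self) -> str | None:
        return self._text
```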
Trenton H
c2b8b22fb4 Chore: Convert mail parser to plugin style (#12397)
* Refactor(mail): rename paperless_mail/parsers.py → paperless/parsers/mail.py

Preserve git history for MailDocumentParser by committing the rename
separately before editing, following the project convention.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Refactor(mail): move mail parser tests to paperless/tests/parsers/

Move test_parsers.py → test_mail_parser.py and test_parsers_live.py →
test_mail_parser_live.py alongside the other built-in parser tests,
preserving git history before editing. Update MailDocumentParser import
to the new canonical location.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Chore: move mail parser sample files to paperless/tests/samples/mail/

Relocate all mail test fixtures from src/paperless_mail/tests/samples/ to
src/paperless/tests/samples/mail/ ahead of the parser plugin refactor.
Add the new path to the codespell skip list to prevent false-positive
spell corrections in binary/fixture email files.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Feat(tests): add mail parser fixtures to paperless/tests/parsers/conftest.py

Add mail_samples_dir, per-file sample fixtures, and mail_parser
(context-manager style) to mirror the old paperless_mail conftest
but rooted at the new samples/mail/ location.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Feat(parsers): migrate MailDocumentParser to ParserProtocol

Move the mail parser from paperless_mail/parsers.py to
paperless/parsers/mail.py and refactor it to implement ParserProtocol:

- Class-level name/version/author/url attributes
- supported_mime_types() and score() classmethods (score=20)
- can_produce_archive=False, requires_pdf_rendition=True
- Context manager lifecycle (__enter__/__exit__)
- New parse() signature without mailrule_id kwarg; consumer sets
  parser.mailrule_id before calling parse() instead
- get_text()/get_date()/get_archive_path() accessor methods
- extract_metadata() returning email headers and attachment info

Register MailDocumentParser in the ParserRegistry alongside Text and
Tika parsers. Update consumer, signals, and all import sites to use
the new location. Update tests to use the new accessor API, patch
paths, and context-manager fixture.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Fix(parsers): pop legacy constructor args in mail signal wrapper

MailDocumentParser.__init__ takes no constructor args in the new
protocol. Update the get_parser() signal wrapper to pop logging_group
and progress_callback (passed by the legacy consumer dispatch path)
before instantiating — the same pattern used by TextDocumentParser.

Also update test_mail_parser_receives_mailrule to use the real signal
wrapper (mail_get_parser) instead of MailDocumentParser directly, so
the test exercises the actual dispatch path and matches the new
parse() call signature (no mailrule kwarg).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Bumps this so we can run

* Fixes location of the fixture

* Removes fixtures which were duplicated

* Feat(parsers): add ParserContext and configure() to ParserProtocol

Replace the ad-hoc mailrule_id attribute assignment with a typed,
immutable ParserContext dataclass and a configure() method on the
Protocol:

- ParserContext(frozen=True, slots=True) lives in paperless/parsers/
  alongside ParserProtocol and MetadataEntry; currently carries only
  mailrule_id but is designed to grow with output_type, ocr_mode, and
  ocr_language in a future phase (decoupling parsers from settings.*)
- ParserProtocol.configure(context: ParserContext) -> None is the
  extension point; no-op by default
- MailDocumentParser.configure() reads mailrule_id into _mailrule_id
- TextDocumentParser and TikaDocumentParser implement a no-op configure()
- Consumer calls document_parser.configure(ParserContext(...)) before
  parse(), replacing the isinstance(parser, MailDocumentParser) guard
  and the direct attribute mutation

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Feat(parsers): call configure(ParserContext()) in update_document task

Apply the same new-style parser shim pattern as the consumer to
update_document_content_maybe_archive_file:

- Call __enter__ for Text/Tika parsers after instantiation
- Call configure(ParserContext()) before parse() for all new-style parsers
  (mailrule_id is not available here — this is a re-process of an
  existing document, so the default empty context is correct)
- Call parse(path, mime_type) with 2 args for new-style parsers
- Call get_thumbnail(path, mime_type) with 2 args for new-style parsers
- Call __exit__ instead of cleanup() in the finally block

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Fix(tests): add configure() to DummyParser and missing-method parametrize

ParserProtocol now requires configure(context: ParserContext) -> None.
Update DummyParser in test_registry.py to implement it, and add
'missing-configure' to the test_partial_compliant_fails_isinstance
parametrize list so the new method is covered by the negative test.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Cleans up the reprocess task and generally reduces duplication of classes

* Corrects the score return

* Updates so these parsers can report a page count, assuming an archive was produced when called

* Increases test coverage

* One more coverage

* Updates typing

* Updates typing

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-20 09:22:18 -07:00
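A sketch of the ParserContext shape described above: a frozen, slotted dataclass handed to configure() before parse(). Only mailrule_id is grounded in the commit; the stand-in parser class and the default value are assumptions:

```python
from dataclasses import dataclass


@dataclass(frozen=True, slots=True)
class ParserContext:
    # Today just the mail-rule id; the commit says it is designed to grow
    # (output_type, ocr_mode, ocr_language) in a later phase.
    mailrule_id: int | None = None


class MailParserSketch:
    """Stand-in for MailDocumentParser, not the real class."""

    def __init__(self) -> None:
        self._mailrule_id: int | None = None

    def configure(self, context: ParserContext) -> None:
        # Replaces the old `parser.mailrule_id = ...` attribute mutation
        # and the isinstance(parser, MailDocumentParser) guard.
        self._mailrule_id = context.mailrule_id


# Consumer side, per the commit: configure() runs before parse().
parser = MailParserSketch()
parser.configure(ParserContext(mailrule_id=42))
```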
shamoon
d671e34559 Documentation: OIDC token_auth_method setting for v3 (#12398) 2026-03-19 22:03:00 -07:00
dependabot[bot]
f7c12d550a Chore(deps): Bump tinytag in the uv group across 1 directory (#12396)
Bumps the uv group with 1 update in the / directory: [tinytag](https://github.com/tinytag/tinytag).


Updates `tinytag` from 2.2.0 to 2.2.1
- [Release notes](https://github.com/tinytag/tinytag/releases)
- [Commits](https://github.com/tinytag/tinytag/compare/2.2.0...2.2.1)

---
updated-dependencies:
- dependency-name: tinytag
  dependency-version: 2.2.1
  dependency-type: indirect
  dependency-group: uv
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-19 11:02:16 -07:00
Trenton H
68fc898042 Fix: Resolve more instances of tests which mutated global states (#12395) 2026-03-19 10:05:07 -07:00
Trenton H
2cbe6ae892 Feature: Convert remote AI parser to plugin system (#12334)
* Refactor: move remote parser, test, and sample to paperless.parsers

Relocates three files to their new homes in the parser plugin system:

- src/paperless_remote/parsers.py
    → src/paperless/parsers/remote.py
- src/paperless_remote/tests/test_parser.py
    → src/paperless/tests/parsers/test_remote_parser.py
- src/paperless_remote/tests/samples/simple-digital.pdf
    → src/paperless/tests/samples/remote/simple-digital.pdf

Content and imports will be updated in the follow-up commit that
rewrites the parser to the new ParserProtocol interface.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Feature: migrate RemoteDocumentParser to ParserProtocol interface

Rewrites the remote OCR parser to the new plugin system contract:

- `supported_mime_types()` is now a classmethod that always returns the
  full set of 7 MIME types; the old instance-method hack (returning {}
  when unconfigured) is removed
- `score()` classmethod returns None when no remote engine is configured
  (making the parser invisible to the registry), and 20 when active —
  higher than the tesseract default of 10 so the remote engine takes
  priority when both are available
- No longer inherits from RasterisedDocumentParser; inherits no parser
  class at all — just implements the protocol directly
- `can_produce_archive = True`; `requires_pdf_rendition = False`
- `_azure_ai_vision_parse()` takes explicit config arg; API client
  created and closed within the method
- `get_page_count()` returns the PDF page count for application/pdf,
  delegating to the new `get_page_count_for_pdf()` utility
- `extract_metadata()` delegates to `extract_pdf_metadata()` for PDFs;
  returns [] for all other MIME types

New files:
- `src/paperless/parsers/utils.py` — shared `extract_pdf_metadata()` and
  `get_page_count_for_pdf()` utilities (pikepdf-based); both the remote
  and tesseract parsers will use these going forward
- `src/paperless/tests/parsers/test_remote_parser.py` — 42 pytest-style
  tests using pytest-django `settings` and pytest-mock `mocker` fixtures
- `src/paperless/tests/parsers/conftest.py` — remote parser instance,
  sample-file, and settings-helper fixtures

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Refactor: use fixture factory and usefixtures in remote parser tests

- `_make_azure_mock` helper promoted to `make_azure_mock` factory fixture
  in conftest.py; tests call `make_azure_mock()` or
  `make_azure_mock("custom text")` instead of a module-level function
- `azure_settings` and `no_engine_settings` applied via
  `@pytest.mark.usefixtures` wherever their value is not referenced
  inside the test body; `TestRemoteParserParseError` marked at the class
  level since all three tests need the same setting

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Refactor: improve remote parser test fixture structure

- make_azure_mock moved from conftest.py back into test_remote_parser.py;
  it is specific to that module and does not belong in shared fixtures
- azure_client fixture composes azure_settings + make_azure_mock + patch
  in one step; tests no longer repeat the mocker.patch call or carry an
  unused azure_settings parameter
- failing_azure_client fixture similarly composes azure_settings + patch
  with a RuntimeError side effect; TestRemoteParserParseError now only
  receives the mock it actually uses
- All @pytest.mark.parametrize calls use pytest.param with explicit ids
  (pdf, png, jpeg, ...) for readable test output

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Refactor: wire RemoteDocumentParser into consumer and fix signals

- paperless_remote/signals.py: import from paperless.parsers.remote
  (new location after git mv). supported_mime_types() is now a
  classmethod that always returns the full set, so get_supported_mime_types()
  in the signal layer explicitly checks RemoteEngineConfig validity and
  returns {} when unconfigured — preserving the old behaviour where an
  unconfigured remote parser does not register for any MIME types.

- documents/consumer.py: extend the _parser_cleanup() shim, parse()
  dispatch, and get_thumbnail() dispatch to include RemoteDocumentParser
  alongside TextDocumentParser. Both new-style parsers use __exit__
  for cleanup and take (document_path, mime_type) without a file_name
  argument.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Refactor: fix type errors in remote parser and signals

- remote.py: add `if TYPE_CHECKING: assert` guards before the Azure
  client construction to narrow config.endpoint and config.api_key from
  str|None to str. The narrowing is safe: engine_is_valid() guarantees
  both are non-None when it returns True (api_key explicitly; endpoint
  via `not (engine=="azureai" and endpoint is None)` for the only valid
  engine). Asserts are wrapped in TYPE_CHECKING so they carry zero
  runtime cost.

- signals.py: add full type annotations — return types, Any-typed
  sender parameter, and explicit logging_group argument replacing *args.
  Add `from __future__ import annotations` for consistent annotation style.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Fix: get_parser factory forwards logging_group, drops progress_callback

consumer.py calls parser_class(logging_group, progress_callback=...).
RemoteDocumentParser.__init__ accepts logging_group but not
progress_callback, so only the latter is dropped — matching the pattern
established by the TextDocumentParser signals shim.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Fix: text parser get_parser forwards logging_group, drops progress_callback

TextDocumentParser.__init__ accepts logging_group: object = None, same
as RemoteDocumentParser. The old shim incorrectly dropped it; fix to
forward it as a positional arg and only drop progress_callback.
Add type annotations and from __future__ import annotations for
consistency with the remote parser signals shim.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-18 16:19:46 -07:00
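The `if TYPE_CHECKING: assert` narrowing trick this commit describes, shown in isolation. The config object is a generic stand-in, not the real RemoteEngineConfig:

```python
from dataclasses import dataclass
from typing import TYPE_CHECKING


@dataclass
class EngineConfig:
    """Stand-in for the real RemoteEngineConfig."""

    endpoint: str | None
    api_key: str | None

    def engine_is_valid(self) -> bool:
        return self.endpoint is not None and self.api_key is not None


def build_client_args(config: EngineConfig) -> tuple[str, str]:
    if not config.engine_is_valid():
        raise ValueError("remote engine not configured")

    if TYPE_CHECKING:
        # Type checkers treat TYPE_CHECKING as True, so these asserts
        # narrow endpoint/api_key from `str | None` to `str`; at runtime
        # the block is skipped entirely, so they cost nothing.
        assert config.endpoint is not None
        assert config.api_key is not None

    return (config.endpoint, config.api_key)
```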
Trenton H
b0bb31654f Bumps zensical to 0.0.26 to resolve the wheel building it tries to do (#12392) 2026-03-18 22:53:34 +00:00
Trenton H
0f7c02de5e Fix: test: add regression test for workflow save clobbering filename (#12390)
Add test_workflow_document_updated_does_not_overwrite_filename to
verify that run_workflows (DOCUMENT_UPDATED path) does not revert a
DB filename that was updated by a concurrent bulk_update_documents
task's update_filename_and_move_files call.

The test replicates the race window by:
  - Updating the DB filename directly (simulating BUD-1 completing)
  - Mocking refresh_from_db so the stale in-memory filename persists
  - Asserting the DB filename is not clobbered after run_workflows

Relates to: https://github.com/paperless-ngx/paperless-ngx/issues/12386

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-18 13:31:09 -07:00
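A compressed illustration of the race-replication approach the test message describes, built from stand-ins; the real test uses the Document model and run_workflows, whose signatures are not reproduced here:

```python
from unittest import mock


class FakeDocument:
    """Stand-in for the Document model; not the real test code."""

    def __init__(self, db: dict[str, str]) -> None:
        self._db = db
        self.filename = db["filename"]  # in-memory copy

    def refresh_from_db(self) -> None:
        self.filename = self._db["filename"]

    def save(self, update_fields: list[str] | None = None) -> None:
        # A safe save only writes the fields it actually changed.
        if update_fields is None or "filename" in update_fields:
            self._db["filename"] = self.filename


def test_workflow_does_not_clobber_concurrent_rename() -> None:
    db = {"filename": "old.pdf"}
    doc = FakeDocument(db)
    db["filename"] = "renamed.pdf"  # a concurrent bulk-edit task completed

    # Mock refresh_from_db so the stale in-memory name persists, then run
    # the workflow stand-in and check the DB value survived.
    with mock.patch.object(FakeDocument, "refresh_from_db"):
        doc.save(update_fields=["title"])  # workflow touches other fields only

    assert db["filename"] == "renamed.pdf"
```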
Trenton H
95dea787f2 Fix: don't try to usermod/groupmod when non-root + update docs (#12365) (#12391) 2026-03-18 10:38:45 -07:00
shamoon
b6501b0c47 Fix: avoid moving files if already moved (#12389) 2026-03-18 09:51:48 -07:00
dependabot[bot]
d162c83eb7 Chore(deps): Bump ujson from 5.11.0 to 5.12.0 (#12387)
Bumps [ujson](https://github.com/ultrajson/ultrajson) from 5.11.0 to 5.12.0.
- [Release notes](https://github.com/ultrajson/ultrajson/releases)
- [Commits](https://github.com/ultrajson/ultrajson/compare/5.11.0...5.12.0)

---
updated-dependencies:
- dependency-name: ujson
  dependency-version: 5.12.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-18 09:19:36 -07:00
shamoon
d3ac75741f Update serialisers.py 2026-03-18 07:09:51 -07:00
shamoon
87ebd13abc Fix: remove pagination from document notes api spec (#12388) 2026-03-18 06:48:05 -07:00
dependabot[bot]
3abff21d1f Chore(deps): Bump pyasn1 from 0.6.2 to 0.6.3 (#12370)
Bumps [pyasn1](https://github.com/pyasn1/pyasn1) from 0.6.2 to 0.6.3.
- [Release notes](https://github.com/pyasn1/pyasn1/releases)
- [Changelog](https://github.com/pyasn1/pyasn1/blob/main/CHANGES.rst)
- [Commits](https://github.com/pyasn1/pyasn1/compare/v0.6.2...v0.6.3)

---
updated-dependencies:
- dependency-name: pyasn1
  dependency-version: 0.6.3
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-18 06:51:56 +00:00
dependabot[bot]
0a08499fc7 Chore(deps): Bump https://github.com/astral-sh/ruff-pre-commit (#12371)
Bumps the pre-commit-dependencies group with 1 update: [https://github.com/astral-sh/ruff-pre-commit](https://github.com/astral-sh/ruff-pre-commit).


Updates `https://github.com/astral-sh/ruff-pre-commit` from v0.15.5 to 0.15.6
- [Release notes](https://github.com/astral-sh/ruff-pre-commit/releases)
- [Commits](https://github.com/astral-sh/ruff-pre-commit/compare/v0.15.5...v0.15.6)

---
updated-dependencies:
- dependency-name: https://github.com/astral-sh/ruff-pre-commit
  dependency-version: 0.15.6
  dependency-type: direct:production
  dependency-group: pre-commit-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-18 06:25:40 +00:00
dependabot[bot]
330ee696a8 Chore(deps): Bump the actions group with 2 updates (#12377)
Bumps the actions group with 2 updates: [docker/metadata-action](https://github.com/docker/metadata-action) and [docker/build-push-action](https://github.com/docker/build-push-action).


Updates `docker/metadata-action` from 5.10.0 to 6.0.0
- [Release notes](https://github.com/docker/metadata-action/releases)
- [Commits](https://github.com/docker/metadata-action/compare/v5.10.0...v6.0.0)

Updates `docker/build-push-action` from 6.19.2 to 7.0.0
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v6.19.2...v7.0.0)

---
updated-dependencies:
- dependency-name: docker/metadata-action
  dependency-version: 6.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: actions
- dependency-name: docker/build-push-action
  dependency-version: 7.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-18 06:18:11 +00:00
dependabot[bot]
b98697ab8b Chore(deps): Bump the utilities-patch group across 1 directory with 2 updates (#12382)
Bumps the utilities-patch group with 2 updates in the / directory: [llama-index-core](https://github.com/run-llama/llama_index) and [zensical](https://github.com/zensical/zensical).


Updates `llama-index-core` from 0.14.15 to 0.14.16
- [Release notes](https://github.com/run-llama/llama_index/releases)
- [Changelog](https://github.com/run-llama/llama_index/blob/main/CHANGELOG.md)
- [Commits](https://github.com/run-llama/llama_index/compare/v0.14.15...v0.14.16)

Updates `zensical` from 0.0.24 to 0.0.25
- [Release notes](https://github.com/zensical/zensical/releases)
- [Commits](https://github.com/zensical/zensical/compare/v0.0.24...v0.0.25)

---
updated-dependencies:
- dependency-name: llama-index-core
  dependency-version: 0.14.16
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: utilities-patch
- dependency-name: zensical
  dependency-version: 0.0.25
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: utilities-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-17 22:31:04 -07:00
dependabot[bot]
7e94dd8208 Chore(deps): Bump openai in the utilities-minor group (#12379)
Bumps the utilities-minor group with 1 update: [openai](https://github.com/openai/openai-python).


Updates `openai` from 2.24.0 to 2.26.0
- [Release notes](https://github.com/openai/openai-python/releases)
- [Changelog](https://github.com/openai/openai-python/blob/main/CHANGELOG.md)
- [Commits](https://github.com/openai/openai-python/compare/v2.24.0...v2.26.0)

---
updated-dependencies:
- dependency-name: openai
  dependency-version: 2.26.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: utilities-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-18 04:29:01 +00:00
dependabot[bot]
79da72f69c Chore(deps-dev): Bump types-python-dateutil (#12380)
Bumps [types-python-dateutil](https://github.com/typeshed-internal/stub_uploader) from 2.9.0.20260124 to 2.9.0.20260305.
- [Commits](https://github.com/typeshed-internal/stub_uploader/commits)

---
updated-dependencies:
- dependency-name: types-python-dateutil
  dependency-version: 2.9.0.20260305
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-18 04:07:06 +00:00
dependabot[bot]
261ae9d8ce Chore(deps): Update django-allauth[mfa,socialaccount] requirement (#12381)
Updates the requirements on [django-allauth[mfa,socialaccount]](https://github.com/sponsors/pennersr) to permit the latest version.
- [Commits](https://github.com/sponsors/pennersr/commits)

---
updated-dependencies:
- dependency-name: django-allauth[mfa,socialaccount]
  dependency-version: 65.15.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-18 03:55:03 +00:00
dependabot[bot]
0e2c191524 Chore(deps-dev): Bump the frontend-jest-dependencies group (#12374)
Bumps the frontend-jest-dependencies group in /src-ui with 2 updates: [jest](https://github.com/jestjs/jest/tree/HEAD/packages/jest) and [jest-environment-jsdom](https://github.com/jestjs/jest/tree/HEAD/packages/jest-environment-jsdom).


Updates `jest` from 30.2.0 to 30.3.0
- [Release notes](https://github.com/jestjs/jest/releases)
- [Changelog](https://github.com/jestjs/jest/blob/main/CHANGELOG.md)
- [Commits](https://github.com/jestjs/jest/commits/v30.3.0/packages/jest)

Updates `jest-environment-jsdom` from 30.2.0 to 30.3.0
- [Release notes](https://github.com/jestjs/jest/releases)
- [Changelog](https://github.com/jestjs/jest/blob/main/CHANGELOG.md)
- [Commits](https://github.com/jestjs/jest/commits/v30.3.0/packages/jest-environment-jsdom)

---
updated-dependencies:
- dependency-name: jest
  dependency-version: 30.3.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: frontend-jest-dependencies
- dependency-name: jest-environment-jsdom
  dependency-version: 30.3.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: frontend-jest-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-18 01:15:52 +00:00
dependabot[bot]
ab4656692d Chore(deps): Bump @ng-select/ng-select (#12373)
Bumps the frontend-angular-dependencies group in /src-ui with 1 update: [@ng-select/ng-select](https://github.com/ng-select/ng-select).


Updates `@ng-select/ng-select` from 21.4.1 to 21.5.2
- [Release notes](https://github.com/ng-select/ng-select/releases)
- [Changelog](https://github.com/ng-select/ng-select/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ng-select/ng-select/compare/v21.4.1...v21.5.2)

---
updated-dependencies:
- dependency-name: "@ng-select/ng-select"
  dependency-version: 21.5.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: frontend-angular-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-18 01:07:48 +00:00
dependabot[bot]
03e2c352c2 Chore(deps-dev): Bump @types/node from 25.3.3 to 25.4.0 in /src-ui (#12376)
Bumps [@types/node](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/HEAD/types/node) from 25.3.3 to 25.4.0.
- [Release notes](https://github.com/DefinitelyTyped/DefinitelyTyped/releases)
- [Commits](https://github.com/DefinitelyTyped/DefinitelyTyped/commits/HEAD/types/node)

---
updated-dependencies:
- dependency-name: "@types/node"
  dependency-version: 25.4.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-18 00:56:32 +00:00
dependabot[bot]
2d46ed9692 Chore(deps-dev): Bump the frontend-eslint-dependencies group (#12375)
Bumps the frontend-eslint-dependencies group in /src-ui with 4 updates: [@typescript-eslint/eslint-plugin](https://github.com/typescript-eslint/typescript-eslint/tree/HEAD/packages/eslint-plugin), [@typescript-eslint/parser](https://github.com/typescript-eslint/typescript-eslint/tree/HEAD/packages/parser), [@typescript-eslint/utils](https://github.com/typescript-eslint/typescript-eslint/tree/HEAD/packages/utils) and [eslint](https://github.com/eslint/eslint).


Updates `@typescript-eslint/eslint-plugin` from 8.54.0 to 8.57.0
- [Release notes](https://github.com/typescript-eslint/typescript-eslint/releases)
- [Changelog](https://github.com/typescript-eslint/typescript-eslint/blob/main/packages/eslint-plugin/CHANGELOG.md)
- [Commits](https://github.com/typescript-eslint/typescript-eslint/commits/v8.57.0/packages/eslint-plugin)

Updates `@typescript-eslint/parser` from 8.54.0 to 8.57.0
- [Release notes](https://github.com/typescript-eslint/typescript-eslint/releases)
- [Changelog](https://github.com/typescript-eslint/typescript-eslint/blob/main/packages/parser/CHANGELOG.md)
- [Commits](https://github.com/typescript-eslint/typescript-eslint/commits/v8.57.0/packages/parser)

Updates `@typescript-eslint/utils` from 8.54.0 to 8.57.0
- [Release notes](https://github.com/typescript-eslint/typescript-eslint/releases)
- [Changelog](https://github.com/typescript-eslint/typescript-eslint/blob/main/packages/utils/CHANGELOG.md)
- [Commits](https://github.com/typescript-eslint/typescript-eslint/commits/v8.57.0/packages/utils)

Updates `eslint` from 10.0.2 to 10.0.3
- [Release notes](https://github.com/eslint/eslint/releases)
- [Commits](https://github.com/eslint/eslint/compare/v10.0.2...v10.0.3)

---
updated-dependencies:
- dependency-name: "@typescript-eslint/eslint-plugin"
  dependency-version: 8.57.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: frontend-eslint-dependencies
- dependency-name: "@typescript-eslint/parser"
  dependency-version: 8.57.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: frontend-eslint-dependencies
- dependency-name: "@typescript-eslint/utils"
  dependency-version: 8.57.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: frontend-eslint-dependencies
- dependency-name: eslint
  dependency-version: 10.0.3
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: frontend-eslint-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-18 00:41:24 +00:00
GitHub Actions
8d23d17ae8 Auto translate strings 2026-03-17 22:44:54 +00:00
Trenton H
aea2927a02 Feature: Convert Tika parser to the plugin system (#12333)
* Chore: move Tika parser and tests to paperless/

Move TikaDocumentParser and its tests to the canonical parser package
location, matching the pattern established for TextDocumentParser:

- src/paperless_tika/parsers.py → src/paperless/parsers/tika.py
- src/paperless_tika/tests/test_tika_parser.py → src/paperless/tests/parsers/test_tika_parser.py
- src/paperless_tika/tests/samples/ → src/paperless/tests/samples/tika/

Merge tika fixtures (tika_parser, sample_odt_file, sample_docx_file,
sample_doc_file, sample_broken_odt) into the shared parsers conftest.
Remove the now-empty src/paperless_tika/tests/conftest.py.

Content is unchanged — this commit is rename-only so git history is
preserved on the moved files.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Feature: Phase 3 — migrate TikaDocumentParser to ParserProtocol

Refactor TikaDocumentParser to satisfy ParserProtocol without subclassing
the legacy DocumentParser ABC:

- Add ClassVars: name, version, author, url
- Add supported_mime_types() classmethod (12 Office/ODF/RTF MIME types)
- Add score() classmethod — returns None when TIKA_ENABLED is False, 10 otherwise
- can_produce_archive = False (PDF is for display, not an OCR archive)
- requires_pdf_rendition = True (Office formats need PDF for browser display)
- __enter__/__exit__ via ExitStack: TikaClient opened once per parser
  lifetime and shared across parse() and extract_metadata() calls
- extract_metadata() falls back to a short-lived TikaClient when called
  outside a context manager (legacy view-layer metadata path)
- _convert_to_pdf() uses OutputTypeConfig() to honour the database-stored
  ApplicationConfiguration before falling back to the env-var setting
- Rename convert_to_pdf → _convert_to_pdf (private helper)

Update paperless_tika/signals.py shim to import from the new module path
and drop the legacy logging_group/progress_callback kwargs.

Update documents/consumer.py to extend the existing TextDocumentParser
special cases to also cover TikaDocumentParser (parse/get_thumbnail
signatures, __exit__ cleanup).

Add TestTikaParserRegistryInterface (7 tests) covering score(), properties,
and ParserProtocol isinstance check.  Update existing tests to use the new
accessor API (get_text, get_date, get_archive_path, _convert_to_pdf).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Fix: update remaining imports and move live Tika tests after parser migration

- src/documents/tests/test_parsers.py: import TikaDocumentParser from
  paperless.parsers.tika (old paperless_tika.parsers no longer exists)
- git mv paperless_tika/tests/test_live_tika.py →
  paperless/tests/parsers/test_live_tika.py to co-locate all Tika tests
  with the parser; update import and replace old attribute API
  (tika_parser.text/.archive_path) with accessor methods
  (get_text/get_archive_path)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Fix: satisfy mypy and pyrefly for TikaDocumentParser

Use a TYPE_CHECKING-guarded assert to narrow self._tika_client from
TikaClient | None to TikaClient at the point of use in parse().  The
assert is visible to type checkers (TYPE_CHECKING=True) so both mypy
and pyrefly accept the subsequent attribute accesses without error;
at runtime TYPE_CHECKING is False so the assert never executes and no
ruff S101 suppression is required.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Fix: require context manager for TikaDocumentParser; clean up client lifecycle

- consumer.py: call __enter__ for new-style parsers so _tika_client and
  _gotenberg_client are set before parse() is invoked
- views.py: use `with parser` (via nullcontext for old-style parsers) in
  get_metadata so extract_metadata always runs inside a context manager
- tika.py: GotenbergClient added to ExitStack alongside TikaClient;
  inline client creation removed from extract_metadata and _convert_to_pdf;
  __exit__ uses ExitStack.close() instead of __exit__ pass-through

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-17 15:43:28 -07:00
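A sketch of the ExitStack lifecycle described above: clients are opened once in __enter__ and all closed by a single ExitStack.close() in __exit__. TikaClient and GotenbergClient are replaced with a generic stand-in here:

```python
from contextlib import ExitStack


class FakeClient:
    """Stand-in for TikaClient / GotenbergClient."""

    def __enter__(self) -> "FakeClient":
        return self

    def __exit__(self, *exc_info: object) -> None:
        print("client closed")


class TikaStyleParser:
    def __enter__(self) -> "TikaStyleParser":
        # Clients are opened once per parser lifetime and shared across
        # parse() and extract_metadata() calls, per the commit message.
        self._stack = ExitStack()
        self._tika_client = self._stack.enter_context(FakeClient())
        self._gotenberg_client = self._stack.enter_context(FakeClient())
        return self

    def __exit__(self, *exc_info: object) -> None:
        # A single close() unwinds every client registered on the stack.
        self._stack.close()


with TikaStyleParser() as parser:
    pass  # both clients are open here; both are closed on exit
```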
shamoon
a86c9d32fe Fix: fix file button hover color in dark mode (#12367) 2026-03-17 09:26:41 -07:00
GitHub Actions
d53dcad4f6 Auto translate strings 2026-03-17 15:24:49 +00:00
shamoon
736b08ad09 Tweak: use cancel instead of discard for app config button 2026-03-17 08:23:08 -07:00
shamoon
ca5879a54e Fix one test with explicit override 2026-03-16 23:03:31 -07:00
shamoon
4d4f30b5f8 Security: validate outbound llm URLs and block internal endpoints 2026-03-16 22:58:16 -07:00
Trenton H
85fecac401 Fix: don't try to usermod/groupmod when non-root + update docs (#12365) 2026-03-17 05:15:03 +00:00
shamoon
7942edfdf4 Fixhancement: only offer basic auth for appropriate requests (#12362) 2026-03-16 22:07:12 -07:00
Trenton H
470018c011 Chore: Mocks the celery and Redis pings so we don't wait for their timeout each time (#12354) 2026-03-16 20:12:17 +00:00
dependabot[bot]
54679a093a Chore(deps): Bump pyopenssl from 25.3.0 to 26.0.0 (#12363)
Bumps [pyopenssl](https://github.com/pyca/pyopenssl) from 25.3.0 to 26.0.0.
- [Changelog](https://github.com/pyca/pyopenssl/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/pyopenssl/compare/25.3.0...26.0.0)

---
updated-dependencies:
- dependency-name: pyopenssl
  dependency-version: 26.0.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-16 10:36:00 -07:00
dependabot[bot]
58ebcc21be Chore(deps): Bump pyjwt from 2.10.1 to 2.12.0 (#12335)
Bumps [pyjwt](https://github.com/jpadilla/pyjwt) from 2.10.1 to 2.12.0.
- [Release notes](https://github.com/jpadilla/pyjwt/releases)
- [Changelog](https://github.com/jpadilla/pyjwt/blob/master/CHANGELOG.rst)
- [Commits](https://github.com/jpadilla/pyjwt/compare/2.10.1...2.12.0)

---
updated-dependencies:
- dependency-name: pyjwt
  dependency-version: 2.12.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-16 09:57:23 -07:00
Trenton H
1caa3eb8aa Chore: Disables the system checks for management commands in tests and when unnecessary (#12332) 2026-03-16 15:10:35 +00:00
shamoon
866c9fd858 Fix: correct merge bulk edit indentation 2026-03-15 23:50:54 -07:00
shamoon
2bb4af2be6 Change: sort custom fields alphabetically by default (#12358) 2026-03-15 22:52:02 -07:00
GitHub Actions
6b8ff9763d Auto translate strings 2026-03-16 01:52:49 +00:00
shamoon
6034f17c87 Format changelog 2026-03-15 18:51:06 -07:00
shamoon
48cd1cce6a Merge branch 'main' into dev 2026-03-15 18:50:42 -07:00
github-actions[bot]
1e00ad5f30 Documentation: Add v2.20.11 changelog (#12356)
* Changelog v2.20.11 - GHA

* Update changelog for version 2.20.11

Added security advisory and fixed dropdown list issues.

---------

Co-authored-by: github-actions <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: shamoon <4887959+shamoon@users.noreply.github.com>
2026-03-15 18:47:18 -07:00
shamoon
5f26c01c6f Bump version to 2.20.11 2026-03-15 17:16:11 -07:00
shamoon
92e133eeb0 Merge branch 'release/v2.20.x' 2026-03-15 17:15:20 -07:00
shamoon
06b2d5102c Fix GHSA-59xh-5vwx-4c4q 2026-03-15 17:13:08 -07:00
Trenton H
9d69705e26 Feature: Add progress information to the classifier training for a better ux (#12331) 2026-03-14 19:53:52 +00:00
GitHub Actions
01abacab52 Auto translate strings 2026-03-14 05:09:13 +00:00
shamoon
88b8f9b326 Chore: bump Angular dependencies to 21.2.x (#12338) 2026-03-13 22:07:31 -07:00
dependabot[bot]
365ff99934 Bump ocrmypdf from 16.13.0 to 17.3.0 in the document-processing group (#12267)
* Bump ocrmypdf from 16.13.0 to 17.3.0 in the document-processing group

Bumps the document-processing group with 1 update: [ocrmypdf](https://github.com/ocrmypdf/OCRmyPDF).


Updates `ocrmypdf` from 16.13.0 to 17.3.0
- [Release notes](https://github.com/ocrmypdf/OCRmyPDF/releases)
- [Commits](https://github.com/ocrmypdf/OCRmyPDF/compare/v16.13.0...v17.3.0)

---
updated-dependencies:
- dependency-name: ocrmypdf
  dependency-version: 17.3.0
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: document-processing
...

Signed-off-by: dependabot[bot] <support@github.com>

* Updates the argument name for v17

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Trenton H <797416+stumpylog@users.noreply.github.com>
2026-03-13 09:51:21 -07:00
Trenton H
d86cfdb088 Feature: Initial document parser plugin framework (#12294) 2026-03-12 21:53:17 +00:00
shamoon
40255cfdbb Fix: correct dropdown list active color in dark mode (#12328) 2026-03-12 14:06:16 -07:00
dependabot[bot]
c2e1085418 Chore(deps): Bump tornado from 6.5.4 to 6.5.5 (#12327)
Bumps [tornado](https://github.com/tornadoweb/tornado) from 6.5.4 to 6.5.5.
- [Changelog](https://github.com/tornadoweb/tornado/blob/master/docs/releases.rst)
- [Commits](https://github.com/tornadoweb/tornado/compare/v6.5.4...v6.5.5)

---
updated-dependencies:
- dependency-name: tornado
  dependency-version: 6.5.5
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-12 13:44:41 -07:00
Trenton H
ee0d1a3094 Enhancement: Make the StatusConsumer truly async (#12298) 2026-03-12 13:27:35 -07:00
Trenton H
f15394fa5c Fix: Removes the double exec that prevented migrations from running (#12317) 2026-03-12 12:46:12 -07:00
dependabot[bot]
773eb25f7d Chore(deps): Bump the utilities-minor group across 1 directory with 5 updates (#12324)
* Chore(deps): Bump the utilities-minor group across 1 directory with 5 updates

Bumps the utilities-minor group with 5 updates in the / directory:

| Package | From | To |
| --- | --- | --- |
| [drf-spectacular-sidecar](https://github.com/tfranzel/drf-spectacular-sidecar) | `2026.1.1` | `2026.3.1` |
| [filelock](https://github.com/tox-dev/py-filelock) | `3.20.3` | `3.25.0` |
| [scikit-learn](https://github.com/scikit-learn/scikit-learn) | `1.7.2` | `1.8.0` |
| [faker](https://github.com/joke2k/faker) | `40.5.1` | `40.8.0` |
| [pyrefly](https://github.com/facebook/pyrefly) | `0.54.0` | `0.55.0` |



Updates `drf-spectacular-sidecar` from 2026.1.1 to 2026.3.1
- [Commits](https://github.com/tfranzel/drf-spectacular-sidecar/compare/2026.1.1...2026.3.1)

Updates `filelock` from 3.20.3 to 3.25.0
- [Release notes](https://github.com/tox-dev/py-filelock/releases)
- [Changelog](https://github.com/tox-dev/filelock/blob/main/docs/changelog.rst)
- [Commits](https://github.com/tox-dev/py-filelock/compare/3.20.3...3.25.0)

Updates `scikit-learn` from 1.7.2 to 1.8.0
- [Release notes](https://github.com/scikit-learn/scikit-learn/releases)
- [Commits](https://github.com/scikit-learn/scikit-learn/compare/1.7.2...1.8.0)

Updates `faker` from 40.5.1 to 40.8.0
- [Release notes](https://github.com/joke2k/faker/releases)
- [Changelog](https://github.com/joke2k/faker/blob/master/CHANGELOG.md)
- [Commits](https://github.com/joke2k/faker/compare/v40.5.1...v40.8.0)

Updates `pyrefly` from 0.54.0 to 0.55.0
- [Release notes](https://github.com/facebook/pyrefly/releases)
- [Commits](https://github.com/facebook/pyrefly/compare/0.54.0...0.55.0)

---
updated-dependencies:
- dependency-name: drf-spectacular-sidecar
  dependency-version: 2026.3.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: utilities-minor
- dependency-name: filelock
  dependency-version: 3.25.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: utilities-minor
- dependency-name: scikit-learn
  dependency-version: 1.8.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: utilities-minor
- dependency-name: faker
  dependency-version: 40.8.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: utilities-minor
- dependency-name: pyrefly
  dependency-version: 0.55.0
  dependency-type: direct:development
  update-type: version-update:semver-minor
  dependency-group: utilities-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Don't know what your problem is, dependabot

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: shamoon <4887959+shamoon@users.noreply.github.com>
2026-03-12 12:30:42 -07:00
shamoon
d919c341b1 Fix: clear descendant selections in dropdown when parent toggled (#12326) 2026-03-12 11:57:35 -07:00
dependabot[bot]
e2947ccff2 Chore(deps): Bump the pre-commit-dependencies group with 4 updates (#12323)
* Chore(deps): Bump the pre-commit-dependencies group with 4 updates

---
updated-dependencies:
- dependency-name: https://github.com/codespell-project/codespell
  dependency-version: 2.4.2
  dependency-type: direct:production
  dependency-group: pre-commit-dependencies
- dependency-name: prettier
  dependency-version: 3.8.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: pre-commit-dependencies
- dependency-name: prettier-plugin-organize-imports
  dependency-version: 4.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: pre-commit-dependencies
- dependency-name: https://github.com/lovesegfault/beautysh
  dependency-version: 6.4.3
  dependency-type: direct:production
  dependency-group: pre-commit-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>

* Drop this, it seems more trouble than it's worth

* Re-run prek with new prettier

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: shamoon <4887959+shamoon@users.noreply.github.com>
2026-03-12 16:29:57 +00:00
dependabot[bot]
61841a767b Chore(deps): Bump the actions group with 3 updates (#12322)
Bumps the actions group with 3 updates: [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action), [docker/login-action](https://github.com/docker/login-action) and [actions/setup-node](https://github.com/actions/setup-node).


Updates `docker/setup-buildx-action` from 3.12.0 to 4.0.0
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](https://github.com/docker/setup-buildx-action/compare/v3.12.0...v4.0.0)

Updates `docker/login-action` from 3.7.0 to 4.0.0
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/v3.7.0...v4.0.0)

Updates `actions/setup-node` from 6.2.0 to 6.3.0
- [Release notes](https://github.com/actions/setup-node/releases)
- [Commits](https://github.com/actions/setup-node/compare/v6.2.0...v6.3.0)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: actions
- dependency-name: docker/login-action
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: actions
- dependency-name: actions/setup-node
  dependency-version: 6.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Trenton H <797416+stumpylog@users.noreply.github.com>
2026-03-12 09:04:22 -07:00
GitHub Actions
15db023caa Auto translate strings 2026-03-12 15:44:21 +00:00
shamoon
45b363659e Chore: mark document detail email action as deprecated (#12308) 2026-03-12 15:42:14 +00:00
Trenton H
7494161c95 Add dependency groups for pre-commit dependencies 2026-03-12 08:04:21 -07:00
Trenton H
5331312699 Remove cooldown for pre-commit updates (it's not supported)
Removed the default cooldown period for pre-commit updates.
2026-03-12 07:59:27 -07:00
Trenton H
b5a002b8ed Chore: Enable dependabot for pre-commit (#12305) 2026-03-12 07:52:43 -07:00
shamoon
dd8573242d Update api version for frontend dev server 2026-03-12 01:24:38 -07:00
Trenton H
86fa74c115 Fix: Postgres selection, DBENGINE and migrations (#12299) 2026-03-11 11:54:24 -07:00
shamoon
ba0a80a8ad Fix: prevent wrapping with larger amounts of tags on small cards, reset moreTags setting to correct count (#12302) 2026-03-11 07:39:32 -07:00
shamoon
b7b9e83f37 Fix (dev): include DatePipe in BulkEditor unit test 2026-03-11 00:01:06 -07:00
GitHub Actions
217b5df591 Auto translate strings 2026-03-10 23:47:25 +00:00
shamoon
3efc9a5733 Fix: use effective content for matching and suggestion content (#12293) 2026-03-10 23:45:56 +00:00
shamoon
e19f341974 Fix: Pin filelock to ~=3.20.3 (#12297) 2026-03-10 13:38:23 -07:00
GitHub Actions
2b4ea570ef Auto translate strings 2026-03-10 18:58:20 +00:00
shamoon
86573fc1a0 Chore: separate actions from bulk edit endpoint (#12286) 2026-03-10 18:55:36 +00:00
dependabot[bot]
3856ec19c0 docker(deps): bump astral-sh/uv (#12265)
Bumps [astral-sh/uv](https://github.com/astral-sh/uv) from 0.10.7-python3.12-trixie-slim to 0.10.8-python3.12-trixie-slim.
- [Release notes](https://github.com/astral-sh/uv/releases)
- [Changelog](https://github.com/astral-sh/uv/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/uv/compare/0.10.7...0.10.8)

---
updated-dependencies:
- dependency-name: astral-sh/uv
  dependency-version: 0.10.8-python3.12-trixie-slim
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-10 17:27:06 +00:00
shamoon
60319c6d37 Fix: prevent stale db filename during workflow actions (#12289) 2026-03-09 19:32:46 -07:00
GitHub Actions
1221e7f21c Auto translate strings 2026-03-09 22:37:56 +00:00
shamoon
3e32e90355 Breaking: drop support for api versions < 9 (#12284) 2026-03-09 22:36:22 +00:00
Trenton H
63cb75564e Chore: Remove some further old items (encryption passphrase and PNG handling) (#12290) 2026-03-09 22:04:51 +00:00
dependabot[bot]
6955d6c07f Chore(deps): Bump the utilities-patch group across 1 directory with 6 updates (#12291)
* Chore(deps): Bump the utilities-patch group across 1 directory with 6 updates

Bumps the utilities-patch group with 6 updates in the / directory:

| Package | From | To |
| --- | --- | --- |
| llama-index-embeddings-openai | `0.5.1` | `0.5.2` |
| llama-index-llms-openai | `0.6.21` | `0.6.26` |
| [python-dotenv](https://github.com/theskumar/python-dotenv) | `1.2.1` | `1.2.2` |
| [regex](https://github.com/mrabarnett/mrab-regex) | `2026.2.19` | `2026.2.28` |
| [prek](https://github.com/j178/prek) | `0.3.3` | `0.3.5` |
| [ruff](https://github.com/astral-sh/ruff) | `0.15.4` | `0.15.5` |



Updates `llama-index-embeddings-openai` from 0.5.1 to 0.5.2

Updates `llama-index-llms-openai` from 0.6.21 to 0.6.26

Updates `python-dotenv` from 1.2.1 to 1.2.2
- [Release notes](https://github.com/theskumar/python-dotenv/releases)
- [Changelog](https://github.com/theskumar/python-dotenv/blob/main/CHANGELOG.md)
- [Commits](https://github.com/theskumar/python-dotenv/compare/v1.2.1...v1.2.2)

Updates `regex` from 2026.2.19 to 2026.2.28
- [Changelog](https://github.com/mrabarnett/mrab-regex/blob/hg/changelog.txt)
- [Commits](https://github.com/mrabarnett/mrab-regex/compare/2026.2.19...2026.2.28)

Updates `prek` from 0.3.3 to 0.3.5
- [Release notes](https://github.com/j178/prek/releases)
- [Changelog](https://github.com/j178/prek/blob/master/CHANGELOG.md)
- [Commits](https://github.com/j178/prek/compare/v0.3.3...v0.3.5)

Updates `ruff` from 0.15.4 to 0.15.5
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.15.4...0.15.5)

---
updated-dependencies:
- dependency-name: llama-index-embeddings-openai
  dependency-version: 0.5.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: utilities-patch
- dependency-name: llama-index-llms-openai
  dependency-version: 0.6.26
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: utilities-patch
- dependency-name: python-dotenv
  dependency-version: 1.2.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: utilities-patch
- dependency-name: regex
  dependency-version: 2026.2.28
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: utilities-patch
- dependency-name: prek
  dependency-version: 0.3.5
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: utilities-patch
- dependency-name: ruff
  dependency-version: 0.15.5
  dependency-type: direct:development
  update-type: version-update:semver-patch
  dependency-group: utilities-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update .pre-commit-config.yaml

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: shamoon <4887959+shamoon@users.noreply.github.com>
2026-03-09 19:47:19 +00:00
shamoon
d85ee29976 Fix ci gate base 2026-03-09 11:16:46 -07:00
GitHub Actions
0c7d56c5e7 Auto translate strings 2026-03-09 17:45:53 +00:00
Trenton H
0bcf904e3a Chore: Finish settings refactor (#12263) 2026-03-09 17:43:51 +00:00
Trenton H
bcc2f11152 Performance: Stream JSON during import for memory improvements (#12276)
* Perf: stream manifest parsing with ijson in document_importer

Replace bulk json.load of the full manifest (which materializes the
entire JSON array into memory) with incremental ijson streaming.
Eliminates self.manifest entirely — records are never all in memory
at once.

- Add ijson>=3.2 dependency
- New module-level iter_manifest_records() generator
- load_manifest_files() collects paths only; no parsing at load time
- check_manifest_validity() streams without accumulating records
- decrypt_secret_fields() streams each manifest to a .decrypted.json
  temp file record-by-record; temp files cleaned up after file copy
- _import_files_from_manifest() collects only document records (small
  fraction of manifest) for the tqdm progress bar

Measured on 200 docs + 200 CustomFieldInstances:
- Streaming validation: peak memory 3081 KiB -> 333 KiB (89% reduction)
- Stream-decrypt to file: peak memory 3081 KiB -> 549 KiB (82% reduction)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* Perf: slim dict in _import_files_from_manifest, discard fields

When collecting document records for the file-copy step, extract only
the 4 keys the loop actually uses (pk + 3 exported filename keys) and
discard the full fields dict (content, checksum, tags, etc.).

Peak memory for the document-record list: 939 KiB -> 375 KiB (60% reduction).
Wall time unchanged.
2026-03-09 10:20:48 -07:00
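The streaming pattern described in this commit, as a minimal hedged sketch (the manifest path and record-validation logic here are illustrative; `ijson.items` is the library's real API for iterating a top-level JSON array):

```python
import ijson

def iter_manifest_records(manifest_path: str):
    """Yield manifest records one at a time, instead of materializing
    the whole JSON array with json.load()."""
    with open(manifest_path, "rb") as f:
        # The "item" prefix walks each element of a top-level JSON array.
        yield from ijson.items(f, "item")

# Peak memory stays flat: only one record is ever in memory at a time.
for record in iter_manifest_records("manifest.json"):
    if "model" not in record:  # illustrative validity check
        raise ValueError("Manifest record is missing a 'model' key")
```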
shamoon
e18b1fd99d Chore: use unified "gates" for ci tests and docs checks (#12277) 2026-03-09 17:02:34 +00:00
272 changed files with 21248 additions and 13536 deletions

View File

@@ -21,6 +21,7 @@ body:
- [The installation instructions](https://docs.paperless-ngx.com/setup/#installation).
- [Existing issues and discussions](https://github.com/paperless-ngx/paperless-ngx/search?q=&type=issues).
- Disable any custom container initialization scripts, if using
- Remove any third-party parser plugins — issues caused by or requiring changes to a third-party plugin will be closed without investigation.
If you encounter issues while installing or configuring Paperless-ngx, please post in the ["Support" section of the discussions](https://github.com/paperless-ngx/paperless-ngx/discussions/new?category=support).
- type: textarea
@@ -120,5 +121,7 @@ body:
required: true
- label: I have already searched for relevant existing issues and discussions before opening this report.
required: true
- label: I have reproduced this issue with all third-party parser plugins removed. I understand that issues caused by third-party plugins will be closed without investigation.
required: true
- label: I have updated the title field above with a concise description.
required: true

View File

@@ -12,6 +12,8 @@ updates:
open-pull-requests-limit: 10
schedule:
interval: "monthly"
cooldown:
default-days: 7
labels:
- "frontend"
- "dependencies"
@@ -36,7 +38,9 @@ updates:
directory: "/"
# Check for updates once a week
schedule:
interval: "weekly"
interval: "monthly"
cooldown:
default-days: 7
labels:
- "backend"
- "dependencies"
@@ -97,6 +101,8 @@ updates:
schedule:
# Check for updates to GitHub Actions every month
interval: "monthly"
cooldown:
default-days: 7
labels:
- "ci-cd"
- "dependencies"
@@ -112,7 +118,9 @@ updates:
- "/"
- "/.devcontainer/"
schedule:
interval: "weekly"
interval: "monthly"
cooldown:
default-days: 7
open-pull-requests-limit: 5
labels:
- "dependencies"
@@ -123,7 +131,9 @@ updates:
- package-ecosystem: "docker-compose"
directory: "/docker/compose/"
schedule:
interval: "weekly"
interval: "monthly"
cooldown:
default-days: 7
open-pull-requests-limit: 5
labels:
- "dependencies"
@@ -147,3 +157,14 @@ updates:
postgres:
patterns:
- "docker.io/library/postgres*"
greenmail:
patterns:
- "docker.io/greenmail*"
- package-ecosystem: "pre-commit" # See documentation for possible values
directory: "/" # Location of package manifests
schedule:
interval: "monthly"
groups:
pre-commit-dependencies:
patterns:
- "*"

View File

@@ -3,21 +3,9 @@ on:
push:
branches-ignore:
- 'translations**'
paths:
- 'src/**'
- 'pyproject.toml'
- 'uv.lock'
- 'docker/compose/docker-compose.ci-test.yml'
- '.github/workflows/ci-backend.yml'
pull_request:
branches-ignore:
- 'translations**'
paths:
- 'src/**'
- 'pyproject.toml'
- 'uv.lock'
- 'docker/compose/docker-compose.ci-test.yml'
- '.github/workflows/ci-backend.yml'
workflow_dispatch:
concurrency:
group: backend-${{ github.event.pull_request.number || github.ref }}
@@ -26,7 +14,55 @@ env:
DEFAULT_UV_VERSION: "0.10.x"
NLTK_DATA: "/usr/share/nltk_data"
jobs:
changes:
name: Detect Backend Changes
runs-on: ubuntu-slim
outputs:
backend_changed: ${{ steps.force.outputs.run_all == 'true' || steps.filter.outputs.backend == 'true' }}
steps:
- name: Checkout
uses: actions/checkout@v6.0.2
with:
fetch-depth: 0
- name: Decide run mode
id: force
run: |
if [[ "${{ github.event_name }}" == "workflow_dispatch" ]]; then
echo "run_all=true" >> "$GITHUB_OUTPUT"
elif [[ "${{ github.event_name }}" == "push" && ( "${{ github.ref_name }}" == "main" || "${{ github.ref_name }}" == "dev" ) ]]; then
echo "run_all=true" >> "$GITHUB_OUTPUT"
else
echo "run_all=false" >> "$GITHUB_OUTPUT"
fi
- name: Set diff range
id: range
if: steps.force.outputs.run_all != 'true'
run: |
if [[ "${{ github.event_name }}" == "pull_request" ]]; then
echo "base=${{ github.event.pull_request.base.sha }}" >> "$GITHUB_OUTPUT"
elif [[ "${{ github.event.created }}" == "true" ]]; then
echo "base=${{ github.event.repository.default_branch }}" >> "$GITHUB_OUTPUT"
else
echo "base=${{ github.event.before }}" >> "$GITHUB_OUTPUT"
fi
echo "ref=${{ github.sha }}" >> "$GITHUB_OUTPUT"
- name: Detect changes
id: filter
if: steps.force.outputs.run_all != 'true'
uses: dorny/paths-filter@v3.0.2
with:
base: ${{ steps.range.outputs.base }}
ref: ${{ steps.range.outputs.ref }}
filters: |
backend:
- 'src/**'
- 'pyproject.toml'
- 'uv.lock'
- 'docker/compose/docker-compose.ci-test.yml'
- '.github/workflows/ci-backend.yml'
test:
needs: changes
if: needs.changes.outputs.backend_changed == 'true'
name: "Python ${{ matrix.python-version }}"
runs-on: ubuntu-24.04
strategy:
@@ -100,6 +136,8 @@ jobs:
docker compose --file docker/compose/docker-compose.ci-test.yml logs
docker compose --file docker/compose/docker-compose.ci-test.yml down
typing:
needs: changes
if: needs.changes.outputs.backend_changed == 'true'
name: Check project typing
runs-on: ubuntu-24.04
env:
@@ -150,3 +188,27 @@ jobs:
--show-error-codes \
--warn-unused-configs \
src/ | uv run mypy-baseline filter
gate:
name: Backend CI Gate
needs: [changes, test, typing]
if: always()
runs-on: ubuntu-slim
steps:
- name: Check gate
run: |
if [[ "${{ needs.changes.outputs.backend_changed }}" != "true" ]]; then
echo "No backend-relevant changes detected."
exit 0
fi
if [[ "${{ needs.test.result }}" != "success" ]]; then
echo "::error::Backend test job result: ${{ needs.test.result }}"
exit 1
fi
if [[ "${{ needs.typing.result }}" != "success" ]]; then
echo "::error::Backend typing job result: ${{ needs.typing.result }}"
exit 1
fi
echo "Backend checks passed."

View File

@@ -104,9 +104,9 @@ jobs:
echo "repository=${repo_name}"
echo "name=${repo_name}" >> $GITHUB_OUTPUT
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3.12.0
uses: docker/setup-buildx-action@v4.0.0
- name: Login to GitHub Container Registry
uses: docker/login-action@v3.7.0
uses: docker/login-action@v4.0.0
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
@@ -119,7 +119,7 @@ jobs:
sudo rm -rf "$AGENT_TOOLSDIRECTORY"
- name: Docker metadata
id: docker-meta
uses: docker/metadata-action@v5.10.0
uses: docker/metadata-action@v6.0.0
with:
images: |
${{ env.REGISTRY }}/${{ steps.repo.outputs.name }}
@@ -130,7 +130,7 @@ jobs:
type=semver,pattern={{major}}.{{minor}}
- name: Build and push by digest
id: build
uses: docker/build-push-action@v6.19.2
uses: docker/build-push-action@v7.0.0
with:
context: .
file: ./Dockerfile
@@ -179,29 +179,29 @@ jobs:
echo "Downloaded digests:"
ls -la /tmp/digests/
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3.12.0
uses: docker/setup-buildx-action@v4.0.0
- name: Login to GitHub Container Registry
uses: docker/login-action@v3.7.0
uses: docker/login-action@v4.0.0
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Docker Hub
if: needs.build-arch.outputs.push-external == 'true'
uses: docker/login-action@v3.7.0
uses: docker/login-action@v4.0.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to Quay.io
if: needs.build-arch.outputs.push-external == 'true'
uses: docker/login-action@v3.7.0
uses: docker/login-action@v4.0.0
with:
registry: quay.io
username: ${{ secrets.QUAY_USERNAME }}
password: ${{ secrets.QUAY_ROBOT_TOKEN }}
- name: Docker metadata
id: docker-meta
uses: docker/metadata-action@v5.10.0
uses: docker/metadata-action@v6.0.0
with:
images: |
${{ env.REGISTRY }}/${{ needs.build-arch.outputs.repository }}

View File

@@ -1,22 +1,9 @@
name: Documentation
on:
push:
branches:
- main
- dev
paths:
- 'docs/**'
- 'zensical.toml'
- 'pyproject.toml'
- 'uv.lock'
- '.github/workflows/ci-docs.yml'
branches-ignore:
- 'translations**'
pull_request:
paths:
- 'docs/**'
- 'zensical.toml'
- 'pyproject.toml'
- 'uv.lock'
- '.github/workflows/ci-docs.yml'
workflow_dispatch:
concurrency:
group: docs-${{ github.event.pull_request.number || github.ref }}
@@ -29,7 +16,55 @@ env:
DEFAULT_UV_VERSION: "0.10.x"
DEFAULT_PYTHON_VERSION: "3.12"
jobs:
changes:
name: Detect Docs Changes
runs-on: ubuntu-slim
outputs:
docs_changed: ${{ steps.force.outputs.run_all == 'true' || steps.filter.outputs.docs == 'true' }}
steps:
- name: Checkout
uses: actions/checkout@v6.0.2
with:
fetch-depth: 0
- name: Decide run mode
id: force
run: |
if [[ "${{ github.event_name }}" == "workflow_dispatch" ]]; then
echo "run_all=true" >> "$GITHUB_OUTPUT"
elif [[ "${{ github.event_name }}" == "push" && ( "${{ github.ref_name }}" == "main" || "${{ github.ref_name }}" == "dev" ) ]]; then
echo "run_all=true" >> "$GITHUB_OUTPUT"
else
echo "run_all=false" >> "$GITHUB_OUTPUT"
fi
- name: Set diff range
id: range
if: steps.force.outputs.run_all != 'true'
run: |
if [[ "${{ github.event_name }}" == "pull_request" ]]; then
echo "base=${{ github.event.pull_request.base.sha }}" >> "$GITHUB_OUTPUT"
elif [[ "${{ github.event.created }}" == "true" ]]; then
echo "base=${{ github.event.repository.default_branch }}" >> "$GITHUB_OUTPUT"
else
echo "base=${{ github.event.before }}" >> "$GITHUB_OUTPUT"
fi
echo "ref=${{ github.sha }}" >> "$GITHUB_OUTPUT"
- name: Detect changes
id: filter
if: steps.force.outputs.run_all != 'true'
uses: dorny/paths-filter@v3.0.2
with:
base: ${{ steps.range.outputs.base }}
ref: ${{ steps.range.outputs.ref }}
filters: |
docs:
- 'docs/**'
- 'zensical.toml'
- 'pyproject.toml'
- 'uv.lock'
- '.github/workflows/ci-docs.yml'
build:
needs: changes
if: needs.changes.outputs.docs_changed == 'true'
name: Build Documentation
runs-on: ubuntu-24.04
steps:
@@ -64,8 +99,8 @@ jobs:
name: github-pages-${{ github.run_id }}-${{ github.run_attempt }}
deploy:
name: Deploy Documentation
needs: build
if: github.event_name == 'push' && github.ref == 'refs/heads/main'
needs: [changes, build]
if: github.event_name == 'push' && github.ref == 'refs/heads/main' && needs.changes.outputs.docs_changed == 'true'
runs-on: ubuntu-24.04
environment:
name: github-pages
@@ -76,3 +111,22 @@ jobs:
id: deployment
with:
artifact_name: github-pages-${{ github.run_id }}-${{ github.run_attempt }}
gate:
name: Docs CI Gate
needs: [changes, build]
if: always()
runs-on: ubuntu-slim
steps:
- name: Check gate
run: |
if [[ "${{ needs.changes.outputs.docs_changed }}" != "true" ]]; then
echo "No docs-relevant changes detected."
exit 0
fi
if [[ "${{ needs.build.result }}" != "success" ]]; then
echo "::error::Docs build job result: ${{ needs.build.result }}"
exit 1
fi
echo "Docs checks passed."

View File

@@ -3,21 +3,60 @@ on:
push:
branches-ignore:
- 'translations**'
paths:
- 'src-ui/**'
- '.github/workflows/ci-frontend.yml'
pull_request:
branches-ignore:
- 'translations**'
paths:
- 'src-ui/**'
- '.github/workflows/ci-frontend.yml'
workflow_dispatch:
concurrency:
group: frontend-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
changes:
name: Detect Frontend Changes
runs-on: ubuntu-slim
outputs:
frontend_changed: ${{ steps.force.outputs.run_all == 'true' || steps.filter.outputs.frontend == 'true' }}
steps:
- name: Checkout
uses: actions/checkout@v6.0.2
with:
fetch-depth: 0
- name: Decide run mode
id: force
run: |
if [[ "${{ github.event_name }}" == "workflow_dispatch" ]]; then
echo "run_all=true" >> "$GITHUB_OUTPUT"
elif [[ "${{ github.event_name }}" == "push" && ( "${{ github.ref_name }}" == "main" || "${{ github.ref_name }}" == "dev" ) ]]; then
echo "run_all=true" >> "$GITHUB_OUTPUT"
else
echo "run_all=false" >> "$GITHUB_OUTPUT"
fi
- name: Set diff range
id: range
if: steps.force.outputs.run_all != 'true'
run: |
if [[ "${{ github.event_name }}" == "pull_request" ]]; then
echo "base=${{ github.event.pull_request.base.sha }}" >> "$GITHUB_OUTPUT"
elif [[ "${{ github.event.created }}" == "true" ]]; then
echo "base=${{ github.event.repository.default_branch }}" >> "$GITHUB_OUTPUT"
else
echo "base=${{ github.event.before }}" >> "$GITHUB_OUTPUT"
fi
echo "ref=${{ github.sha }}" >> "$GITHUB_OUTPUT"
- name: Detect changes
id: filter
if: steps.force.outputs.run_all != 'true'
uses: dorny/paths-filter@v3.0.2
with:
base: ${{ steps.range.outputs.base }}
ref: ${{ steps.range.outputs.ref }}
filters: |
frontend:
- 'src-ui/**'
- '.github/workflows/ci-frontend.yml'
install-dependencies:
needs: changes
if: needs.changes.outputs.frontend_changed == 'true'
name: Install Dependencies
runs-on: ubuntu-24.04
steps:
@@ -28,7 +67,7 @@ jobs:
with:
version: 10
- name: Use Node.js 24
uses: actions/setup-node@v6.2.0
uses: actions/setup-node@v6.3.0
with:
node-version: 24.x
cache: 'pnpm'
@@ -45,7 +84,8 @@ jobs:
run: cd src-ui && pnpm install
lint:
name: Lint
needs: install-dependencies
needs: [changes, install-dependencies]
if: needs.changes.outputs.frontend_changed == 'true'
runs-on: ubuntu-24.04
steps:
- name: Checkout
@@ -55,7 +95,7 @@ jobs:
with:
version: 10
- name: Use Node.js 24
uses: actions/setup-node@v6.2.0
uses: actions/setup-node@v6.3.0
with:
node-version: 24.x
cache: 'pnpm'
@@ -73,7 +113,8 @@ jobs:
run: cd src-ui && pnpm run lint
unit-tests:
name: "Unit Tests (${{ matrix.shard-index }}/${{ matrix.shard-count }})"
needs: install-dependencies
needs: [changes, install-dependencies]
if: needs.changes.outputs.frontend_changed == 'true'
runs-on: ubuntu-24.04
strategy:
fail-fast: false
@@ -89,7 +130,7 @@ jobs:
with:
version: 10
- name: Use Node.js 24
uses: actions/setup-node@v6.2.0
uses: actions/setup-node@v6.3.0
with:
node-version: 24.x
cache: 'pnpm'
@@ -119,7 +160,8 @@ jobs:
directory: src-ui/coverage/
e2e-tests:
name: "E2E Tests (${{ matrix.shard-index }}/${{ matrix.shard-count }})"
needs: install-dependencies
needs: [changes, install-dependencies]
if: needs.changes.outputs.frontend_changed == 'true'
runs-on: ubuntu-24.04
container: mcr.microsoft.com/playwright:v1.58.2-noble
env:
@@ -139,7 +181,7 @@ jobs:
with:
version: 10
- name: Use Node.js 24
uses: actions/setup-node@v6.2.0
uses: actions/setup-node@v6.3.0
with:
node-version: 24.x
cache: 'pnpm'
@@ -159,7 +201,8 @@ jobs:
run: cd src-ui && pnpm exec playwright test --shard ${{ matrix.shard-index }}/${{ matrix.shard-count }}
bundle-analysis:
name: Bundle Analysis
needs: [unit-tests, e2e-tests]
needs: [changes, unit-tests, e2e-tests]
if: needs.changes.outputs.frontend_changed == 'true'
runs-on: ubuntu-24.04
steps:
- name: Checkout
@@ -171,7 +214,7 @@ jobs:
with:
version: 10
- name: Use Node.js 24
uses: actions/setup-node@v6.2.0
uses: actions/setup-node@v6.3.0
with:
node-version: 24.x
cache: 'pnpm'
@@ -189,3 +232,42 @@ jobs:
env:
CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
run: cd src-ui && pnpm run build --configuration=production
gate:
name: Frontend CI Gate
needs: [changes, install-dependencies, lint, unit-tests, e2e-tests, bundle-analysis]
if: always()
runs-on: ubuntu-slim
steps:
- name: Check gate
run: |
if [[ "${{ needs.changes.outputs.frontend_changed }}" != "true" ]]; then
echo "No frontend-relevant changes detected."
exit 0
fi
if [[ "${{ needs['install-dependencies'].result }}" != "success" ]]; then
echo "::error::Frontend install job result: ${{ needs['install-dependencies'].result }}"
exit 1
fi
if [[ "${{ needs.lint.result }}" != "success" ]]; then
echo "::error::Frontend lint job result: ${{ needs.lint.result }}"
exit 1
fi
if [[ "${{ needs['unit-tests'].result }}" != "success" ]]; then
echo "::error::Frontend unit-tests job result: ${{ needs['unit-tests'].result }}"
exit 1
fi
if [[ "${{ needs['e2e-tests'].result }}" != "success" ]]; then
echo "::error::Frontend e2e-tests job result: ${{ needs['e2e-tests'].result }}"
exit 1
fi
if [[ "${{ needs['bundle-analysis'].result }}" != "success" ]]; then
echo "::error::Frontend bundle-analysis job result: ${{ needs['bundle-analysis'].result }}"
exit 1
fi
echo "Frontend checks passed."

View File

@@ -35,7 +35,7 @@ jobs:
with:
version: 10
- name: Use Node.js 24
uses: actions/setup-node@v6.2.0
uses: actions/setup-node@v6.3.0
with:
node-version: 24.x
cache: 'pnpm'

View File

@@ -40,7 +40,7 @@ jobs:
with:
version: 10
- name: Use Node.js 24
uses: actions/setup-node@v6.2.0
uses: actions/setup-node@v6.3.0
with:
node-version: 24.x
cache: 'pnpm'

View File

@@ -2437,17 +2437,3 @@ src/paperless_tesseract/tests/test_parser_custom_settings.py:0: error: Item "Non
src/paperless_tesseract/tests/test_parser_custom_settings.py:0: error: Item "None" of "ApplicationConfiguration | None" has no attribute "unpaper_clean" [union-attr]
src/paperless_tesseract/tests/test_parser_custom_settings.py:0: error: Item "None" of "ApplicationConfiguration | None" has no attribute "unpaper_clean" [union-attr]
src/paperless_tesseract/tests/test_parser_custom_settings.py:0: error: Item "None" of "ApplicationConfiguration | None" has no attribute "user_args" [union-attr]
src/paperless_text/parsers.py:0: error: Function is missing a type annotation for one or more arguments [no-untyped-def]
src/paperless_text/parsers.py:0: error: Function is missing a type annotation for one or more arguments [no-untyped-def]
src/paperless_text/parsers.py:0: error: Incompatible types in assignment (expression has type "str", variable has type "None") [assignment]
src/paperless_text/signals.py:0: error: Function is missing a type annotation [no-untyped-def]
src/paperless_text/signals.py:0: error: Function is missing a type annotation [no-untyped-def]
src/paperless_tika/parsers.py:0: error: Argument 1 to "make_thumbnail_from_pdf" has incompatible type "None"; expected "Path" [arg-type]
src/paperless_tika/parsers.py:0: error: Function is missing a return type annotation [no-untyped-def]
src/paperless_tika/parsers.py:0: error: Function is missing a type annotation [no-untyped-def]
src/paperless_tika/parsers.py:0: error: Function is missing a type annotation [no-untyped-def]
src/paperless_tika/parsers.py:0: error: Function is missing a type annotation for one or more arguments [no-untyped-def]
src/paperless_tika/parsers.py:0: error: Function is missing a type annotation for one or more arguments [no-untyped-def]
src/paperless_tika/parsers.py:0: error: Incompatible types in assignment (expression has type "str | None", variable has type "None") [assignment]
src/paperless_tika/signals.py:0: error: Function is missing a type annotation [no-untyped-def]
src/paperless_tika/signals.py:0: error: Function is missing a type annotation [no-untyped-def]

View File

@@ -29,7 +29,7 @@ repos:
- id: check-case-conflict
- id: detect-private-key
- repo: https://github.com/codespell-project/codespell
rev: v2.4.1
rev: v2.4.2
hooks:
- id: codespell
additional_dependencies: [tomli]
@@ -46,11 +46,11 @@ repos:
- ts
- markdown
additional_dependencies:
- prettier@3.3.3
- 'prettier-plugin-organize-imports@4.1.0'
- prettier@3.8.1
- 'prettier-plugin-organize-imports@4.3.0'
# Python hooks
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.15.0
rev: v0.15.6
hooks:
- id: ruff-check
- id: ruff-format
@@ -65,7 +65,7 @@ repos:
- id: hadolint
# Shell script hooks
- repo: https://github.com/lovesegfault/beautysh
rev: v6.4.2
rev: v6.4.3
hooks:
- id: beautysh
types: [file]

View File

@@ -5,14 +5,6 @@ const config = {
singleQuote: true,
// https://prettier.io/docs/en/options.html#trailing-commas
trailingComma: 'es5',
overrides: [
{
files: ['docs/*.md'],
options: {
tabWidth: 4,
},
},
],
plugins: [require('prettier-plugin-organize-imports')],
}

View File

@@ -30,7 +30,7 @@ RUN set -eux \
# Purpose: Installs s6-overlay and rootfs
# Comments:
# - Don't leave anything extra in here either
FROM ghcr.io/astral-sh/uv:0.10.7-python3.12-trixie-slim AS s6-overlay-base
FROM ghcr.io/astral-sh/uv:0.10.9-python3.12-trixie-slim AS s6-overlay-base
WORKDIR /usr/src/s6

View File

@@ -18,13 +18,13 @@ services:
- "--log-level=warn"
- "--log-format=text"
tika:
image: docker.io/apache/tika:latest
image: docker.io/apache/tika:3.2.3.0
hostname: tika
container_name: tika
network_mode: host
restart: unless-stopped
greenmail:
image: greenmail/standalone:2.1.8
image: docker.io/greenmail/standalone:2.1.8
hostname: greenmail
container_name: greenmail
environment:

View File

@@ -56,6 +56,7 @@ services:
environment:
PAPERLESS_REDIS: redis://broker:6379
PAPERLESS_DBHOST: db
PAPERLESS_DBENGINE: postgres
env_file:
- stack.env
volumes:

View File

@@ -62,6 +62,7 @@ services:
environment:
PAPERLESS_REDIS: redis://broker:6379
PAPERLESS_DBHOST: db
PAPERLESS_DBENGINE: postgresql
PAPERLESS_TIKA_ENABLED: 1
PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg:3000
PAPERLESS_TIKA_ENDPOINT: http://tika:9998

View File

@@ -56,6 +56,7 @@ services:
environment:
PAPERLESS_REDIS: redis://broker:6379
PAPERLESS_DBHOST: db
PAPERLESS_DBENGINE: postgresql
volumes:
data:
media:

View File

@@ -51,6 +51,7 @@ services:
env_file: docker-compose.env
environment:
PAPERLESS_REDIS: redis://broker:6379
PAPERLESS_DBENGINE: sqlite
PAPERLESS_TIKA_ENABLED: 1
PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg:3000
PAPERLESS_TIKA_ENDPOINT: http://tika:9998

View File

@@ -42,6 +42,7 @@ services:
env_file: docker-compose.env
environment:
PAPERLESS_REDIS: redis://broker:6379
PAPERLESS_DBENGINE: sqlite
volumes:
data:
media:

View File

@@ -10,8 +10,10 @@ cd "${PAPERLESS_SRC_DIR}"
# The whole migrate, with flock, needs to run as the right user
if [[ -n "${USER_IS_NON_ROOT}" ]]; then
python3 manage.py check --tag compatibility paperless || exit 1
exec s6-setlock -n "${data_dir}/migration_lock" python3 manage.py migrate --skip-checks --no-input
else
s6-setuidgid paperless python3 manage.py check --tag compatibility paperless || exit 1
exec s6-setuidgid paperless \
s6-setlock -n "${data_dir}/migration_lock" \
python3 manage.py migrate --skip-checks --no-input

View File

@@ -2,6 +2,17 @@
# shellcheck shell=bash
declare -r log_prefix="[init-user]"
# When the container is started as a non-root user (e.g. via `user: 999:999`
# in Docker Compose), usermod/groupmod require root and are meaningless.
# USERMAP_* variables only apply to the root-started path.
if [[ -n "${USER_IS_NON_ROOT}" ]]; then
if [[ -n "${USERMAP_UID}" || -n "${USERMAP_GID}" ]]; then
echo "${log_prefix} WARNING: USERMAP_UID/USERMAP_GID are set but have no effect when the container is started as a non-root user"
fi
echo "${log_prefix} Running as non-root user ($(id --user):$(id --group)), skipping UID/GID remapping"
exit 0
fi
declare -r usermap_original_uid=$(id -u paperless)
declare -r usermap_original_gid=$(id -g paperless)
declare -r usermap_new_uid=${USERMAP_UID:-$usermap_original_uid}

View File

@@ -10,16 +10,16 @@ consuming documents at that time.
Options available to any installation of paperless:
- Use the [document exporter](#exporter). The document exporter exports all your documents,
thumbnails, metadata, and database contents to a specific folder. You may import your
documents and settings into a fresh instance of paperless again or store your
documents in another DMS with this export.
The document exporter is also able to update an already existing
export. Therefore, incremental backups with `rsync` are entirely
possible.
The exporter does not include API tokens and they will need to be re-generated after importing.
!!! caution
@@ -29,28 +29,27 @@ Options available to any installation of paperless:
Options available to docker installations:
- Backup the docker volumes. These usually reside within
`/var/lib/docker/volumes` on the host and you need to be root in
order to access them.
Paperless uses 4 volumes:
- `paperless_media`: This is where your documents are stored.
- `paperless_data`: This is where auxiliary data is stored. This
folder also contains the SQLite database, if you use it.
- `paperless_pgdata`: Exists only if you use PostgreSQL and
contains the database.
- `paperless_dbdata`: Exists only if you use MariaDB and contains
the database.
Options available to bare-metal and non-docker installations:
- Backup the entire paperless folder. This ensures that if your
paperless instance crashes at some point or your disk fails, you can
simply copy the folder back into place and it works.
When using PostgreSQL or MariaDB, you'll also have to backup the
database.
### Restoring {#migrating-restoring}
@@ -509,19 +508,19 @@ collection for issues.
The issues detected by the sanity checker are as follows:
- Missing original files.
- Missing archive files.
- Inaccessible original files due to improper permissions.
- Inaccessible archive files due to improper permissions.
- Corrupted original documents by comparing their checksum against
what is stored in the database.
- Corrupted archive documents by comparing their checksum against what
is stored in the database.
- Missing thumbnails.
- Inaccessible thumbnails due to improper permissions.
- Documents without any content (warning).
- Orphaned files in the media directory (warning). These are files
that are not referenced by any document in paperless.
```
document_sanity_checker

View File

@@ -25,20 +25,20 @@ documents.
The following algorithms are available:
- **None:** No matching will be performed.
- **Any:** Looks for any occurrence of any word provided in match in
the PDF. If you define the match as `Bank1 Bank2`, it will match
documents containing either of these terms.
- **All:** Requires that every word provided appears in the PDF,
albeit not in the order provided.
- **Exact:** Matches only if the match appears exactly as provided
(i.e. preserve ordering) in the PDF.
- **Regular expression:** Parses the match as a regular expression and
tries to find a match within the document.
- **Fuzzy match:** Uses a partial matching based on locating the tag text
inside the document, using a [partial ratio](https://rapidfuzz.github.io/RapidFuzz/Usage/fuzz.html#partial-ratio)
- **Auto:** Tries to automatically match new documents. This does not
require you to set a match. See the [notes below](#automatic-matching).
When using the _any_ or _all_ matching algorithms, you can search for
terms that consist of multiple words by enclosing them in double quotes.
@@ -69,33 +69,33 @@ Paperless tries to hide much of the involved complexity with this
approach. However, there are a couple caveats you need to keep in mind
when using this feature:
- Changes to your documents are not immediately reflected by the
matching algorithm. The neural network needs to be _trained_ on your
documents after changes. Paperless periodically (default: once each
hour) checks for changes and does this automatically for you.
- The Auto matching algorithm only takes documents into account which
are NOT placed in your inbox (i.e. do not have any inbox tags assigned
to them). This ensures that the neural network only learns from
documents which you have correctly tagged before.
- The matching algorithm can only work if there is a correlation
between the tag, correspondent, document type, or storage path and
the document itself. Your bank statements usually contain your bank
account number and the name of the bank, so this works reasonably
well. However, tags such as "TODO" cannot be automatically
assigned.
- The matching algorithm needs a reasonable number of documents to
identify when to assign tags, correspondents, storage paths, and
types. If one out of a thousand documents has the correspondent
"Very obscure web shop I bought something five years ago", it will
probably not assign this correspondent automatically if you buy
something from them again. The more documents, the better.
- Paperless also needs a reasonable amount of negative examples to
decide when not to assign a certain tag, correspondent, document
type, or storage path. This will usually be the case as you start
filling up paperless with documents. Example: If all your documents
are either from "Webshop" or "Bank", paperless will assign one
of these correspondents to ANY new document, if both are set to
automatic matching.
## Hooking into the consumption process {#consume-hooks}
@@ -243,12 +243,12 @@ webserver:
Troubleshooting:
- Monitor the Docker Compose log
`cd ~/paperless-ngx; docker compose logs -f`
- Check your script's permission e.g. in case of permission error
`sudo chmod 755 post-consumption-example.sh`
- Pipe your script's output to a log file e.g.
`echo "${DOCUMENT_ID}" | tee --append /usr/src/paperless/scripts/post-consumption-example.log`
## File name handling {#file-name-handling}
@@ -307,35 +307,35 @@ will create a directory structure as follows:
Paperless provides the following variables for use within filenames:
- `{{ asn }}`: The archive serial number of the document, or "none".
- `{{ correspondent }}`: The name of the correspondent, or "none".
- `{{ document_type }}`: The name of the document type, or "none".
- `{{ tag_list }}`: A comma separated list of all tags assigned to the
document.
- `{{ title }}`: The title of the document.
- `{{ created }}`: The full date (ISO 8601 format, e.g. `2024-03-14`) the document was created.
- `{{ created_year }}`: Year created only, formatted as the year with
century.
- `{{ created_year_short }}`: Year created only, formatted as the year
without century, zero padded.
- `{{ created_month }}`: Month created only (number 01-12).
- `{{ created_month_name }}`: Month created name, as per locale
- `{{ created_month_name_short }}`: Month created abbreviated name, as per
locale
- `{{ created_day }}`: Day created only (number 01-31).
- `{{ added }}`: The full date (ISO format) the document was added to
paperless.
- `{{ added_year }}`: Year added only.
- `{{ added_year_short }}`: Year added only, formatted as the year without
century, zero padded.
- `{{ added_month }}`: Month added only (number 01-12).
- `{{ added_month_name }}`: Month added name, as per locale
- `{{ added_month_name_short }}`: Month added abbreviated name, as per
locale
- `{{ added_day }}`: Day added only (number 01-31).
- `{{ owner_username }}`: Username of document owner, if any, or "none"
- `{{ original_name }}`: Document original filename, minus the extension, if any, or "none"
- `{{ doc_pk }}`: The paperless identifier (primary key) for the document.
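Putting a few of these together, a format might look like the following (an illustrative value only, combining variables documented above):

```
PAPERLESS_FILENAME_FORMAT={{ created_year }}/{{ correspondent }}/{{ title }}
```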
!!! warning
@@ -388,10 +388,10 @@ before empty placeholders are removed as well, empty directories are omitted.
When a single storage layout is not sufficient for your use case, storage paths allow for more complex
structure to set precisely where each document is stored in the file system.
- Each storage path is a [`PAPERLESS_FILENAME_FORMAT`](configuration.md#PAPERLESS_FILENAME_FORMAT) and
follows the rules described above
- Each document is assigned a storage path using the matching algorithms described above, but can be
overwritten at any time
For example, you could define the following two storage paths:
@@ -457,13 +457,13 @@ The `get_cf_value` filter retrieves a value from custom field data with optional
###### Parameters
- `custom_fields`: This _must_ be the provided custom field data
- `name` (str): Name of the custom field to retrieve
- `default` (str, optional): Default value to return if field is not found or has no value
###### Returns
- `str | None`: The field value, default value, or `None` if neither exists
###### Examples
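A hedged illustration of the filter in a template, assuming a text custom field named "customer" exists and `custom_fields` is the provided field data:

```
{{ custom_fields | get_cf_value('customer', 'unknown') }}
```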
@@ -487,12 +487,12 @@ The `datetime` filter formats a datetime string or datetime object using Python'
###### Parameters
- `value` (str | datetime): Date/time value to format (strings will be parsed automatically)
- `format` (str): Python strftime format string
###### Returns
- `str`: Formatted datetime string
###### Examples
@@ -525,13 +525,13 @@ An ISO string can also be provided to control the output format.
###### Parameters
- `value` (date | datetime | str): Date, datetime object or ISO string to format (datetime should be timezone-aware)
- `format` (str): Format type - either a Babel preset ('short', 'medium', 'long', 'full') or custom pattern
- `locale` (str): Locale code for localization (e.g., 'en_US', 'fr_FR', 'de_DE')
###### Returns
- `str`: Localized, formatted date string
###### Examples
@@ -565,15 +565,15 @@ See the [supported format codes](https://unicode.org/reports/tr35/tr35-dates.htm
### Format Presets
- **short**: Abbreviated format (e.g., "1/15/24")
- **medium**: Medium-length format (e.g., "Jan 15, 2024")
- **long**: Long format with full month name (e.g., "January 15, 2024")
- **full**: Full format including day of week (e.g., "Monday, January 15, 2024")
#### Additional Variables
- `{{ tag_name_list }}`: A list of tag names applied to the document, ordered by the tag name. Note this is a list, not a single string
- `{{ custom_fields }}`: A mapping of custom field names to their type and value. A user can access the mapping by field name or check if a field is applied by checking its existence in the variable.
!!! tip
@@ -675,15 +675,15 @@ installation, you can use volumes to accomplish this:
```yaml
services:
  # ...
  webserver:
    environment:
      - PAPERLESS_ENABLE_FLOWER
    ports:
      - 5555:5555 # (2)!
    # ...
    volumes:
      - /path/to/my/flowerconfig.py:/usr/src/paperless/src/paperless/flowerconfig.py:ro # (1)!
```
1. Note the `:ro` tag means the file will be mounted as read only.
@@ -714,15 +714,90 @@ For example, using Docker Compose:
```yaml
services:
  # ...
  webserver:
    # ...
    volumes:
      - /path/to/my/scripts:/custom-cont-init.d:ro # (1)!
```
1. Note the `:ro` tag means the folder will be mounted as read only. This is for extra security against changes
## Installing third-party parser plugins {#parser-plugins}
Third-party parser plugins extend Paperless-ngx to support additional file
formats. A plugin is a Python package that advertises itself under the
`paperless_ngx.parsers` entry point group. Refer to the
[developer documentation](development.md#making-custom-parsers) for how to
create one.
!!! warning "Third-party plugins are not officially supported"
The Paperless-ngx maintainers do not provide support for third-party
plugins. Issues caused by or requiring changes to a third-party plugin
will be closed without further investigation. Always reproduce problems
with all plugins removed before filing a bug report.
### Docker
Use a [custom container initialization script](#custom-container-initialization)
to install the package before the webserver starts. Create a shell script and
mount it into `/custom-cont-init.d`:
```bash
#!/bin/bash
# /path/to/my/scripts/install-parsers.sh
pip install my-paperless-parser-package
```
Mount it in your `docker-compose.yml`:
```yaml
services:
webserver:
# ...
volumes:
- /path/to/my/scripts:/custom-cont-init.d:ro
```
The script runs as `root` before the webserver starts, so the package will be
available when Paperless-ngx discovers plugins at startup.
### Bare metal
Install the package into the same Python environment that runs Paperless-ngx.
If you followed the standard bare-metal install guide, that is the `paperless`
user's environment:
```bash
sudo -Hu paperless pip3 install my-paperless-parser-package
```
If you are using `uv` or a virtual environment, activate it first and then run:
```bash
uv pip install my-paperless-parser-package
# or
pip install my-paperless-parser-package
```
Restart all Paperless-ngx services after installation so the new plugin is
discovered.
### Verifying installation
On the next startup, check the application logs for a line confirming
discovery:
```
Loaded third-party parser 'My Parser' v1.0.0 by Acme Corp (entrypoint: 'my_parser').
```
If this line does not appear, verify that the package is installed in the
correct environment and that its `pyproject.toml` declares the
`paperless_ngx.parsers` entry point.
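As a quick diagnostic, you can list everything advertised under that group with the standard library; this sketch assumes only that it runs in the same Python environment as Paperless-ngx:

```python
from importlib.metadata import entry_points

# Prints each parser plugin visible under the documented entry point group.
for ep in entry_points(group="paperless_ngx.parsers"):
    print(f"{ep.name} -> {ep.value}")
```

If the loop prints nothing, the package is most likely installed into a different environment than the one running Paperless-ngx.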
## MySQL Caveats {#mysql-caveats}
### Case Sensitivity
@@ -771,16 +846,16 @@ Paperless is able to utilize barcodes for automatically performing some tasks.
At this time, the library utilized for detection of barcodes supports the following types:
- EAN-13/UPC-A
- UPC-E
- EAN-8
- Code 128
- Code 93
- Code 39
- Codabar
- Interleaved 2 of 5
- QR Code
- SQ Code
For usage in Paperless, the type of barcode does not matter, only the contents of it.
@@ -793,8 +868,8 @@ below.
If document splitting is enabled, Paperless splits _after_ a separator barcode by default.
This means:
- any page containing the configured separator barcode starts a new document, starting with the **next** page
- pages containing the separator barcode are discarded
This is intended for dedicated separator sheets such as PATCH-T pages.
@@ -831,10 +906,10 @@ to `true`.
When enabled, documents will be split at pages containing tag barcodes, similar to how
ASN barcodes work. Key features:
- The page with the tag barcode is **retained** in the resulting document
- **Each split document extracts its own tags** - only tags on pages within that document are assigned
- Multiple tag barcodes can trigger multiple splits in the same document
- Works seamlessly with ASN barcodes - each split document gets its own ASN and tags
This is useful for batch scanning where you place tag barcode pages between different
documents to both separate and categorize them in a single operation.
@@ -996,9 +1071,9 @@ If using docker, you'll need to add the following volume mounts to your `docker-
```yaml
webserver:
  volumes:
    - /home/user/.gnupg/pubring.gpg:/usr/src/paperless/.gnupg/pubring.gpg
    - <path to gpg-agent socket>:/usr/src/paperless/.gnupg/S.gpg-agent
```
For a 'bare-metal' installation no further configuration is necessary. If you
@@ -1006,9 +1081,9 @@ want to use a separate `GNUPG_HOME`, you can do so by configuring the [PAPERLESS
### Troubleshooting
- Make sure that `gpg-agent` is running on your host machine
- Make sure that encryption and decryption work from inside the container using the `gpg` commands from above.
- Check that all files in `/usr/src/paperless/.gnupg` have correct permissions
```shell
paperless@9da1865df327:~/.gnupg$ ls -al

View File

@@ -66,10 +66,10 @@ Full text searching is available on the `/api/documents/` endpoint. Two
specific query parameters cause the API to return full text search
results:
- `/api/documents/?query=your%20search%20query`: Search for a document
using a full text query. For details on the syntax, see [Basic Usage - Searching](usage.md#basic-usage_searching).
- `/api/documents/?more_like_id=1234`: Search for documents similar to
the document with id 1234.
Pagination works exactly the same as it does for normal requests on this
endpoint.
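A short sketch of querying this endpoint with `requests`; the base URL and token are placeholders for your own instance:

```python
import requests

BASE_URL = "http://localhost:8000"  # assumption: your instance
TOKEN = "your-api-token"            # assumption: a token you generated

resp = requests.get(
    f"{BASE_URL}/api/documents/",
    params={"query": "invoice 2024"},  # requests URL-encodes the query
    headers={"Authorization": f"Token {TOKEN}"},
)
resp.raise_for_status()
for doc in resp.json()["results"]:  # paginated response
    print(doc["id"], doc["title"])
```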
@@ -106,12 +106,12 @@ attribute with various information about the search results:
}
```
- `score` is an indication of how well this document matches the query
relative to the other search results.
- `highlights` is an excerpt from the document content and highlights
the search terms with `<span>` tags as shown above.
- `rank` is the index of the search results. The first result will
have rank 0.
### Filtering by custom fields
@@ -122,33 +122,33 @@ use cases:
1. Documents with a custom field "due" (date) between Aug 1, 2024 and
Sept 1, 2024 (inclusive):
`?custom_field_query=["due", "range", ["2024-08-01", "2024-09-01"]]`
`?custom_field_query=["due", "range", ["2024-08-01", "2024-09-01"]]`
2. Documents with a custom field "customer" (text) that equals "bob"
(case sensitive):
`?custom_field_query=["customer", "exact", "bob"]`
`?custom_field_query=["customer", "exact", "bob"]`
3. Documents with a custom field "answered" (boolean) set to `true`:
`?custom_field_query=["answered", "exact", true]`
`?custom_field_query=["answered", "exact", true]`
4. Documents with a custom field "favorite animal" (select) set to either
"cat" or "dog":
`?custom_field_query=["favorite animal", "in", ["cat", "dog"]]`
`?custom_field_query=["favorite animal", "in", ["cat", "dog"]]`
5. Documents with a custom field "address" (text) that is empty:
`?custom_field_query=["OR", [["address", "isnull", true], ["address", "exact", ""]]]`
`?custom_field_query=["OR", [["address", "isnull", true], ["address", "exact", ""]]]`
6. Documents that don't have a field called "foo":
`?custom_field_query=["foo", "exists", false]`
`?custom_field_query=["foo", "exists", false]`
7. Documents that have document links "references" to both document 3 and 7:
`?custom_field_query=["references", "contains", [3, 7]]`
`?custom_field_query=["references", "contains", [3, 7]]`
All field types support basic operations including `exact`, `in`, `isnull`,
and `exists`. String, URL, and monetary fields support case-insensitive
@@ -164,8 +164,8 @@ Get auto completions for a partial search term.
Query parameters:
- `term`: The incomplete term.
- `limit`: Amount of results. Defaults to 10.
Results returned by the endpoint are ordered by importance of the term
in the document index. The first result is the term that has the highest
@@ -189,19 +189,19 @@ from there.
The endpoint supports the following optional form fields:
- `title`: Specify a title that the consumer should use for the
document.
- `created`: Specify a DateTime where the document was created (e.g.
"2016-04-19" or "2016-04-19 06:15:00+02:00").
- `correspondent`: Specify the ID of a correspondent that the consumer
should use for the document.
- `document_type`: Similar to correspondent.
- `storage_path`: Similar to correspondent.
- `tags`: Similar to correspondent. Specify this multiple times to
have multiple tags added to the document.
- `archive_serial_number`: An optional archive serial number to set.
- `custom_fields`: Either an array of custom field ids to assign (with an empty
value) to the document or an object mapping field id -> value.
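A minimal multipart upload using a few of these fields might look like this sketch; the endpoint path, base URL, and token are assumptions to check against the API spec:

```python
import requests

with open("invoice.pdf", "rb") as f:
    response = requests.post(
        "http://localhost:8000/api/documents/post_document/",  # assumption
        headers={"Authorization": "Token YOUR_API_TOKEN"},  # assumption: token auth
        files={"document": f},
        data={
            "title": "March invoice",
            "created": "2016-04-19",
            "tags": [2, 7],  # repeated form field, adds both tags
        },
    )
response.raise_for_status()
print(response.json())  # the UUID of the consumption task, see below
```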
The endpoint will immediately return HTTP 200 if the document consumption
process was started successfully, with the UUID of the consumption task
@@ -215,16 +215,16 @@ consumption including the ID of a created document if consumption succeeded.
Document versions are file-level versions linked to one root document.
- Root document metadata (title, tags, correspondent, document type, storage path, custom fields, permissions) remains shared.
- Version-specific file data (file, mime type, checksums, archive info, extracted text content) belongs to the selected/latest version.
Version-aware endpoints:
- `GET /api/documents/{id}/`: returns root document data; `content` resolves to latest version content by default. Use `?version={version_id}` to resolve content for a specific version.
- `PATCH /api/documents/{id}/`: content updates target the selected version (`?version={version_id}`) or latest version by default; non-content metadata updates target the root document.
- `GET /api/documents/{id}/download/`, `GET /api/documents/{id}/preview/`, `GET /api/documents/{id}/thumb/`, `GET /api/documents/{id}/metadata/`: accept `?version={version_id}`.
- `POST /api/documents/{id}/update_version/`: uploads a new version using multipart form field `document` and optional `version_label`.
- `DELETE /api/documents/{root_id}/versions/{version_id}/`: deletes a non-root version.
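As an illustration, uploading a new version and then downloading a specific one could be sketched as follows (base URL, token, ids and labels are placeholders):

```python
import requests

BASE_URL = "http://localhost:8000"  # assumption: adjust to your instance
HEADERS = {"Authorization": "Token YOUR_API_TOKEN"}  # assumption: token auth

# Upload a new version of document 42.
with open("invoice_signed.pdf", "rb") as f:
    requests.post(
        f"{BASE_URL}/api/documents/42/update_version/",
        headers=HEADERS,
        files={"document": f},
        data={"version_label": "signed copy"},
    ).raise_for_status()

# Download a specific, earlier version of the same document.
pdf = requests.get(
    f"{BASE_URL}/api/documents/42/download/",
    headers=HEADERS,
    params={"version": 7},  # a version id taken from the document's data
)
```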
## Permissions
@@ -282,74 +282,38 @@ a json payload of the format:
The following methods are supported:
- `set_correspondent`
- Requires `parameters`: `{ "correspondent": CORRESPONDENT_ID }`
- `set_document_type`
- Requires `parameters`: `{ "document_type": DOCUMENT_TYPE_ID }`
- `set_storage_path`
- Requires `parameters`: `{ "storage_path": STORAGE_PATH_ID }`
- `add_tag`
- Requires `parameters`: `{ "tag": TAG_ID }`
- `remove_tag`
- Requires `parameters`: `{ "tag": TAG_ID }`
- `modify_tags`
- Requires `parameters`: `{ "add_tags": [LIST_OF_TAG_IDS] }` and `{ "remove_tags": [LIST_OF_TAG_IDS] }`
- `delete`
- No `parameters` required
- `reprocess`
- No `parameters` required
- `set_permissions`
- Requires `parameters`:
- `"set_permissions": PERMISSIONS_OBJ` (see format [above](#permissions)) and / or
- `"owner": OWNER_ID or null`
- `"merge": true or false` (defaults to false)
- The `merge` flag determines if the supplied permissions will overwrite all existing permissions (including
removing them) or be merged with existing permissions.
- `modify_custom_fields`
- Requires `parameters`:
- `"add_custom_fields": { CUSTOM_FIELD_ID: VALUE }`: JSON object consisting of custom field id:value pairs to add to the document, can also be a list of custom field IDs
to add with empty values.
- `"remove_custom_fields": [CUSTOM_FIELD_ID]`: custom field ids to remove from the document.
#### Document-editing operations
Beginning with version 10, the API provides individual endpoints for document-editing operations (`merge`, `rotate`, `edit_pdf`, etc.), so their documentation can be found in the API spec / viewer. Legacy document-editing methods via `/api/documents/bulk_edit/` remain supported for compatibility but are deprecated; clients should migrate to the individual endpoints before they are removed in a future version.
### Objects
@@ -369,41 +333,38 @@ operations, using the endpoint: `/api/bulk_edit_objects/`, which requires a json
## API Versioning
The REST API is versioned.
- Versioning ensures that changes to the API don't break older
clients.
- Clients specify the specific version of the API they wish to use
with every request and Paperless will handle the request using the
specified API version.
- Even if the underlying data model changes, supported older API
versions continue to serve compatible data.
- If no version is specified, Paperless serves the configured default
API version (currently `10`).
- Supported API versions are currently `9` and `10`.
API versions are specified by submitting an additional HTTP `Accept`
header with every request:
```
Accept: application/json; version=10
```
If an invalid version is specified, Paperless responds with
`406 Not Acceptable` and an error message in the body.
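In practice, a client pins the version on every request and can inspect the headers described below on any response; a sketch (base URL and token are assumptions):

```python
import requests

response = requests.get(
    "http://localhost:8000/api/documents/",  # assumption: any API endpoint works
    headers={
        "Authorization": "Token YOUR_API_TOKEN",  # assumption: token auth
        "Accept": "application/json; version=10",
    },
)
# The custom headers described below identify the server's API version
# and the application version.
print(response.headers.get("X-Api-Version"), response.headers.get("X-Version"))
```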
If a client wishes to verify whether it is compatible with any given
server, the following procedure should be performed:
1. Perform an _authenticated_ request against any API endpoint. The
server will add two custom headers to the response:
```
X-Api-Version: 10
X-Version: <server-version>
```
2. Determine whether the client is compatible with this server based on
@@ -423,51 +384,56 @@ Initial API version.
#### Version 2
- Added field `Tag.color`. This read/write string field contains a hex
color such as `#a6cee3`.
- Added read-only field `Tag.text_color`. This field contains the text
color to use for a specific tag, which is either black or white
depending on the brightness of `Tag.color`.
- Removed field `Tag.colour`.
#### Version 3
- Permissions endpoints have been added.
- The format of the `/api/ui_settings/` has changed.
#### Version 4
- Consumption templates were refactored to workflows and API endpoints
changed as such.
#### Version 5
- Added bulk deletion methods for documents and objects.
#### Version 6
- Moved acknowledge tasks endpoint to be under `/api/tasks/acknowledge/`.
#### Version 7
- The format of select type custom fields has changed to return the options
as an array of objects with `id` and `label` fields as opposed to a simple
list of strings. When creating or updating a custom field value of a
document for a select type custom field, the value should be the `id` of
the option, whereas previously it was the index of the option.
#### Version 8
- The user field of document notes now returns a simplified user object
rather than just the user ID.
#### Version 9
- The document `created` field is now a date, not a datetime. The
`created_date` field is considered deprecated and will be removed in a
future version.
#### Version 10
- The `show_on_dashboard` and `show_in_sidebar` fields of saved views have been
removed. Relevant settings are now stored in the UISettings model. Compatibility is maintained
for versions < 10 until support for API v9 is dropped.
- Document-editing operations such as `merge`, `rotate`, and `edit_pdf` have been
moved from the bulk edit endpoint to their own individual endpoints. Using these methods via
the bulk edit endpoint is still supported for compatibility with versions < 10 until support
for API v9 is dropped.


@@ -8,17 +8,17 @@ common [OCR](#ocr) related settings and some frontend settings. If set, these wi
preference over the settings via environment variables. If not set, the environment setting
or applicable default will be utilized instead.
- If you run paperless on docker, `paperless.conf` is not used.
Rather, configure paperless by copying necessary options to
`docker-compose.env`.
- If you are running paperless on anything else, paperless will search
for the configuration file in these locations and use the first one
it finds:
- The environment variable `PAPERLESS_CONFIGURATION_PATH`
- `/path/to/paperless/paperless.conf`
- `/etc/paperless.conf`
- `/usr/local/etc/paperless.conf`
## Required services
@@ -674,6 +674,9 @@ See the corresponding [django-allauth documentation](https://docs.allauth.org/en
for a list of provider configurations. You will also need to include the relevant Django 'application' inside the
[PAPERLESS_APPS](#PAPERLESS_APPS) setting to activate that specific authentication provider (e.g. `allauth.socialaccount.providers.openid_connect` for the [OIDC Connect provider](https://docs.allauth.org/en/latest/socialaccount/providers/openid_connect.html)).
: For OpenID Connect providers, set `settings.token_auth_method` if your identity provider
requires a specific token endpoint authentication method.
Defaults to None, which does not enable any third party authentication systems.
#### [`PAPERLESS_SOCIAL_AUTO_SIGNUP=<bool>`](#PAPERLESS_SOCIAL_AUTO_SIGNUP) {#PAPERLESS_SOCIAL_AUTO_SIGNUP}
@@ -1947,6 +1950,12 @@ current backend. If not supplied, defaults to "gpt-3.5-turbo" for OpenAI and "ll
Defaults to None.
#### [`PAPERLESS_AI_LLM_ALLOW_INTERNAL_ENDPOINTS=<bool>`](#PAPERLESS_AI_LLM_ALLOW_INTERNAL_ENDPOINTS) {#PAPERLESS_AI_LLM_ALLOW_INTERNAL_ENDPOINTS}
: If set to false, Paperless blocks AI endpoint URLs that resolve to non-public addresses (e.g. localhost or other private addresses).
Defaults to true, which allows internal endpoints.
#### [`PAPERLESS_AI_LLM_INDEX_TASK_CRON=<cron expression>`](#PAPERLESS_AI_LLM_INDEX_TASK_CRON) {#PAPERLESS_AI_LLM_INDEX_TASK_CRON}
: Configures the schedule to update the AI embeddings of text content and metadata for all documents. Only performed if


@@ -6,23 +6,23 @@ on Paperless-ngx.
Check out the source from GitHub. The repository is organized in the
following way:
- `main` always represents the latest release and will only see
changes when a new release is made.
- `dev` contains the code that will be in the next release.
- `feature-X` contains bigger changes that will be in some release, but
not necessarily the next one.
When making functional changes to Paperless-ngx, _always_ make your changes
on the `dev` branch.
Apart from that, the folder structure is as follows:
- `docs/` - Documentation.
- `src-ui/` - Code of the front end.
- `src/` - Code of the back end.
- `scripts/` - Various scripts that help with different parts of
development.
- `docker/` - Files required to build the docker image.
## Contributing to Paperless-ngx
@@ -94,18 +94,17 @@ first-time setup.
```
7. You can now either ...
- install Redis or
- use the included `scripts/start_services.sh` to use Docker to fire
up a Redis instance (and some other services such as Tika,
Gotenberg and a database server) or
- spin up a bare Redis container
```bash
docker run -d -p 6379:6379 --restart unless-stopped redis:latest
```
8. Continue with either back-end or front-end development or both :-).
@@ -118,9 +117,9 @@ work well for development, but you can use whatever you want.
Configure the IDE to use the `src/`-folder as the base source folder.
Configure the following launch configurations in your IDE:
- `uv run manage.py runserver`
- `uv run manage.py document_consumer`
- `uv run celery --app paperless worker -l DEBUG` (or any other log level)
To start them all:
@@ -146,11 +145,11 @@ pnpm ng build --configuration production
### Testing
- Run `pytest` in the `src/` directory to execute all tests. This also
  generates an HTML coverage report. When running tests, `paperless.conf`
  is loaded as well. However, the tests rely on the default
  configuration. This is not ideal, but for now, make sure no settings
  except for DEBUG are overridden when testing.
!!! note
@@ -254,14 +253,14 @@ these parts have to be translated separately.
### Front end localization
- The Angular front end does localization according to the [Angular
documentation](https://angular.io/guide/i18n).
- The source language of the project is "en_US".
- The source strings end up in the file `src-ui/messages.xlf`.
- The translated strings need to be placed in the
`src-ui/src/locale/` folder.
- In order to extract added or changed strings from the source files,
call `ng extract-i18n`.
Adding new languages requires adding the translated files in the
`src-ui/src/locale/` folder and adjusting a couple files.
@@ -307,18 +306,18 @@ A majority of the strings that appear in the back end appear only when
the admin is used. However, some of these are still shown on the front
end (such as error messages).
- The django application does localization according to the [Django
documentation](https://docs.djangoproject.com/en/3.1/topics/i18n/translation/).
- The source language of the project is "en_US".
- Localization files end up in the folder `src/locale/`.
- In order to extract strings from the application, call
`uv run manage.py makemessages -l en_US`. This is important after
making changes to translatable strings.
- The message files need to be compiled for them to show up in the
application. Call `uv run manage.py compilemessages` to do this.
The generated files don't get committed into git, since these are
derived artifacts. The build pipeline takes care of executing this
command.
Adding new languages requires adding the translated files in the
`src/locale/`-folder and adjusting the file
@@ -371,122 +370,363 @@ docker build --file Dockerfile --tag paperless:local .
## Extending Paperless-ngx
Paperless-ngx supports third-party document parsers via a Python entry point
plugin system. Plugins are distributed as ordinary Python packages and
discovered automatically at startup — no changes to the Paperless-ngx source
are required.
!!! warning "Third-party plugins are not officially supported"
The Paperless-ngx maintainers do not provide support for third-party
plugins. Issues that are caused by or require changes to a third-party
plugin will be closed without further investigation. If you believe you
have found a bug in Paperless-ngx itself (not in a plugin), please
reproduce it with all third-party plugins removed before filing an issue.
### Making custom parsers
Paperless-ngx uses parsers to add documents. A parser is responsible for:
- Extracting plain-text content from the document
- Generating a thumbnail image
- _optional:_ Detecting the document's creation date
- _optional:_ Producing a searchable PDF archive copy
Custom parsers are distributed as ordinary Python packages and registered
via a [setuptools entry point](https://setuptools.pypa.io/en/latest/userguide/entry_point.html).
No changes to the Paperless-ngx source are required.
#### 1. Implementing the parser class
Your parser must satisfy the `ParserProtocol` structural interface defined in
`paperless.parsers`. The simplest approach is to write a plain class — no base
class is required, only the right attributes and methods.
**Class-level identity attributes**
The registry reads these before instantiating the parser, so they must be
plain class attributes (not instance attributes or properties):
```python
class MyCustomParser:
name = "My Format Parser" # human-readable name shown in logs
version = "1.0.0" # semantic version string
author = "Acme Corp" # author / organisation
url = "https://example.com/my-parser" # docs or issue tracker
```
**Declaring supported MIME types**
Return a `dict` mapping MIME type strings to preferred file extensions
(including the leading dot). Paperless-ngx uses the extension when storing
archive copies and serving files for download.
```python
@classmethod
def supported_mime_types(cls) -> dict[str, str]:
return {
"parser": MyCustomParser,
"weight": 0,
"mime_types": {
"application/pdf": ".pdf",
"image/jpeg": ".jpg",
}
"application/x-my-format": ".myf",
"application/x-my-format-alt": ".myf",
}
```
**Scoring**
When more than one parser can handle a file, the registry calls `score()` on
each candidate and picks the one with the highest result. Return `None` to
decline handling a file even though the MIME type is listed as supported (for
example, when a required external service is not configured).
| Score | Meaning |
| ------ | ------------------------------------------------- |
| `None` | Decline — do not handle this file |
| `10` | Default priority used by all built-in parsers |
| `> 10` | Override a built-in parser for the same MIME type |
```python
@classmethod
def score(
cls,
mime_type: str,
filename: str,
path: "Path | None" = None,
) -> int | None:
# Inspect filename or file bytes here if needed.
return 10
```
**Archive and rendition flags**
```python
@property
def can_produce_archive(self) -> bool:
"""True if parse() can produce a searchable PDF archive copy."""
return True # or False if your parser doesn't produce PDFs
@property
def requires_pdf_rendition(self) -> bool:
"""True if the original format cannot be displayed by a browser
(e.g. DOCX, ODT) and the PDF output must always be kept."""
return False
```
**Context manager — temp directory lifecycle**
Paperless-ngx always uses parsers as context managers. Create a temporary
working directory in `__enter__` (or `__init__`) and remove it in `__exit__`
regardless of whether an exception occurred. Store intermediate files,
thumbnails, and archive PDFs inside this directory.
```python
import shutil
import tempfile
from pathlib import Path
from typing import Self
from types import TracebackType
from django.conf import settings
class MyCustomParser:
...
def __init__(self, logging_group: object = None) -> None:
settings.SCRATCH_DIR.mkdir(parents=True, exist_ok=True)
self._tempdir = Path(
tempfile.mkdtemp(prefix="paperless-", dir=settings.SCRATCH_DIR)
)
self._text: str | None = None
self._archive_path: Path | None = None
def __enter__(self) -> Self:
return self
def __exit__(
self,
exc_type: type[BaseException] | None,
exc_val: BaseException | None,
exc_tb: TracebackType | None,
) -> None:
shutil.rmtree(self._tempdir, ignore_errors=True)
```
**Optional context — `configure()`**
The consumer calls `configure()` with a `ParserContext` after instantiation
and before `parse()`. If your parser doesn't need context, a no-op
implementation is fine:
```python
from paperless.parsers import ParserContext
def configure(self, context: ParserContext) -> None:
pass # override if you need context.mailrule_id, etc.
```
**Parsing**
`parse()` is the core method. It must not return a value; instead, store
results in instance attributes and expose them via the accessor methods below.
Raise `documents.parsers.ParseError` on any unrecoverable failure.
```python
from documents.parsers import ParseError
def parse(
self,
document_path: Path,
mime_type: str,
*,
produce_archive: bool = True,
) -> None:
try:
self._text = extract_text_from_my_format(document_path)
except Exception as e:
raise ParseError(f"Failed to parse {document_path}: {e}") from e
if produce_archive and self.can_produce_archive:
archive = self._tempdir / "archived.pdf"
convert_to_pdf(document_path, archive)
self._archive_path = archive
```
**Result accessors**
```python
def get_text(self) -> str | None:
return self._text
def get_date(self) -> "datetime.datetime | None":
# Return a datetime extracted from the document, or None to let
# Paperless-ngx use its default date-guessing logic.
return None
def get_archive_path(self) -> Path | None:
return self._archive_path
```
**Thumbnail**
`get_thumbnail()` may be called independently of `parse()`. Return the path
to a WebP image inside `self._tempdir`. The image should be roughly 500 × 700
pixels.
```python
def get_thumbnail(self, document_path: Path, mime_type: str) -> Path:
thumb = self._tempdir / "thumb.webp"
render_thumbnail(document_path, thumb)
return thumb
```
**Optional methods**
These are called by the API on demand, not during the consumption pipeline.
Implement them if your format supports the information; otherwise return
`None` / `[]`.
```python
def get_page_count(self, document_path: Path, mime_type: str) -> int | None:
return count_pages(document_path)
def extract_metadata(
self,
document_path: Path,
mime_type: str,
) -> "list[MetadataEntry]":
# Must never raise. Return [] if metadata cannot be read.
from paperless.parsers import MetadataEntry
return [
MetadataEntry(
namespace="https://example.com/ns/",
prefix="ex",
key="Author",
value="Alice",
)
]
```
#### 2. Registering via entry point
Add the following to your package's `pyproject.toml`. The key (left of `=`)
is an arbitrary name used only in log output; the value is the
`module:ClassName` import path.
```toml
[project.entry-points."paperless_ngx.parsers"]
my_parser = "my_package.parsers:MyCustomParser"
```
Install your package into the same Python environment as Paperless-ngx (or
add it to the Docker image), and the parser will be discovered automatically
on the next startup. No configuration changes are needed.
To verify discovery, check the application logs at startup for a line like:
```
Loaded third-party parser 'My Format Parser' v1.0.0 by Acme Corp (entrypoint: 'my_parser').
```
#### 3. Utilities
`paperless.parsers.utils` provides helpers you can import directly:
| Function | Description |
| --------------------------------------- | ---------------------------------------------------------------- |
| `read_file_handle_unicode_errors(path)` | Read a file as UTF-8, replacing invalid bytes instead of raising |
| `get_page_count_for_pdf(path)` | Count pages in a PDF using pikepdf |
| `extract_pdf_metadata(path)` | Extract XMP metadata from a PDF as a `list[MetadataEntry]` |
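For instance, a parser that handles both PDF and plain-text inputs could delegate to these helpers rather than reimplementing them; a sketch (the instance attributes are this example's own):

```python
from pathlib import Path

from paperless.parsers.utils import (
    extract_pdf_metadata,
    get_page_count_for_pdf,
    read_file_handle_unicode_errors,
)


def parse(self, document_path: Path, mime_type: str, *, produce_archive: bool = True) -> None:
    if mime_type == "application/pdf":
        self._page_count = get_page_count_for_pdf(document_path)  # pikepdf-based
        self._metadata = extract_pdf_metadata(document_path)  # list[MetadataEntry]
    else:
        # Lossy UTF-8 read that replaces invalid bytes instead of raising.
        self._text = read_file_handle_unicode_errors(document_path)
```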
#### Minimal example
A complete, working parser for a hypothetical plain-XML format:
```python
from __future__ import annotations
import shutil
import tempfile
from pathlib import Path
from typing import Self
from types import TracebackType
import xml.etree.ElementTree as ET
from django.conf import settings
from documents.parsers import ParseError
from paperless.parsers import ParserContext
class XmlDocumentParser:
name = "XML Parser"
version = "1.0.0"
author = "Acme Corp"
url = "https://example.com/xml-parser"
@classmethod
def supported_mime_types(cls) -> dict[str, str]:
return {"application/xml": ".xml", "text/xml": ".xml"}
@classmethod
def score(cls, mime_type: str, filename: str, path: Path | None = None) -> int | None:
return 10
@property
def can_produce_archive(self) -> bool:
return False
@property
def requires_pdf_rendition(self) -> bool:
return False
def __init__(self, logging_group: object = None) -> None:
settings.SCRATCH_DIR.mkdir(parents=True, exist_ok=True)
self._tempdir = Path(tempfile.mkdtemp(prefix="paperless-", dir=settings.SCRATCH_DIR))
self._text: str | None = None
def __enter__(self) -> Self:
return self
def __exit__(self, exc_type, exc_val, exc_tb) -> None:
shutil.rmtree(self._tempdir, ignore_errors=True)
def configure(self, context: ParserContext) -> None:
pass
def parse(self, document_path: Path, mime_type: str, *, produce_archive: bool = True) -> None:
try:
tree = ET.parse(document_path)
self._text = " ".join(tree.getroot().itertext())
except ET.ParseError as e:
raise ParseError(f"XML parse error: {e}") from e
def get_text(self) -> str | None:
return self._text
def get_date(self):
return None
def get_archive_path(self) -> Path | None:
return None
def get_thumbnail(self, document_path: Path, mime_type: str) -> Path:
from PIL import Image, ImageDraw
img = Image.new("RGB", (500, 700), color="white")
ImageDraw.Draw(img).text((10, 10), "XML Document", fill="black")
out = self._tempdir / "thumb.webp"
img.save(out, format="WEBP")
return out
def get_page_count(self, document_path: Path, mime_type: str) -> int | None:
return None
def extract_metadata(self, document_path: Path, mime_type: str) -> list:
return []
```
### Developing date parser plugins
Paperless-ngx uses a plugin system for date parsing, allowing you to extend or replace the default date parsing behavior. Plugins are discovered using [Python entry points](https://setuptools.pypa.io/en/latest/userguide/entry_point.html).
#### Creating a Date Parser Plugin
To create a custom date parser plugin, you need to:
@@ -494,7 +734,7 @@ To create a custom date parser plugin, you need to:
2. Implement the required abstract method
3. Register your plugin via an entry point
##### 1. Implementing the Parser Class
Your parser must extend `documents.plugins.date_parsing.DateParserPluginBase` and implement the `parse` method:
@@ -534,16 +774,16 @@ class MyDateParserPlugin(DateParserPluginBase):
yield another_datetime
```
##### 2. Configuration and Helper Methods
Your parser instance is initialized with a `DateParserConfig` object accessible via `self.config`. This provides:
- `languages: list[str]` - List of language codes for date parsing
- `timezone_str: str` - Timezone string for date localization
- `ignore_dates: set[datetime.date]` - Dates that should be filtered out
- `reference_time: datetime.datetime` - Current time for filtering future dates
- `filename_date_order: str | None` - Date order preference for filenames (e.g., "DMY", "MDY")
- `content_date_order: str` - Date order preference for content
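As a small illustration of how these values combine, a plugin could filter a candidate date by hand, although the `_filter_date` helper described below already implements this logic for you:

```python
import datetime


def _is_acceptable(self, candidate: datetime.datetime) -> bool:
    """Illustrative only; prefer the `_filter_date` helper described below."""
    if candidate.date() in self.config.ignore_dates:
        return False
    # Assumption: candidate and reference_time share timezone awareness.
    if candidate > self.config.reference_time:
        # Drop dates in the future relative to the reference time.
        return False
    return True
```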
The base class provides two helper methods you can use:
@@ -567,11 +807,11 @@ def _filter_date(
"""
```
##### 3. Resource Management (Optional)
If your plugin needs to acquire or release resources (database connections, API clients, etc.), override the context manager methods. Paperless-ngx will always use plugins as context managers, ensuring resources can be released even in the event of errors.
##### 4. Registering Your Plugin
Register your plugin using a setuptools entry point in your package's `pyproject.toml`:
@@ -582,7 +822,7 @@ my_parser = "my_package.parsers:MyDateParserPlugin"
The entry point name (e.g., `"my_parser"`) is used for sorting when multiple plugins are found. Paperless-ngx will use the first plugin alphabetically by name if multiple plugins are discovered.
#### Plugin Discovery
Paperless-ngx automatically discovers and loads date parser plugins at runtime. The discovery process:
@@ -593,7 +833,7 @@ Paperless-ngx automatically discovers and loads date parser plugins at runtime.
If multiple plugins are installed, a warning is logged indicating which plugin was selected.
#### Example: Simple Date Parser
Here's a minimal example that only looks for ISO 8601 dates:
@@ -625,3 +865,30 @@ class ISODateParserPlugin(DateParserPluginBase):
if filtered_date is not None:
yield filtered_date
```
## Using Visual Studio Code devcontainer
Another easy way to get started with development is to use Visual Studio
Code devcontainers. This approach will create a preconfigured development
environment with all of the required tools and dependencies.
[Learn more about devcontainers](https://code.visualstudio.com/docs/devcontainers/containers).
The .devcontainer/vscode/tasks.json and .devcontainer/vscode/launch.json files
contain more information about the specific tasks and launch configurations (see the
non-standard "description" field).
To get started:
1. Clone the repository on your machine and open the Paperless-ngx folder in VS Code.
2. VS Code will prompt you with "Reopen in container". Do so and wait for the environment to start.
3. In case your host operating system is Windows:
- The Source Control view in Visual Studio Code might show: "The detected Git repository is potentially unsafe as the folder is owned by someone other than the current user." Use "Manage Unsafe Repositories" to fix this.
- Git might have detected modifications for all files, because Windows uses CRLF line endings. Run `git checkout .` in the container's terminal to fix this issue.
4. Initialize the project by running the task **Project Setup: Run all Init Tasks**. This
will initialize the database tables and create a superuser. Then you can compile the front end
for production or run the frontend in debug mode.
5. The project is ready for debugging: start either the full-stack debug or the individual debug
processes. To spin up the project without debugging, run the task **Project Start: Run all Services**.


@@ -44,28 +44,28 @@ system. On Linux, chances are high that this location is
You can always drag those files out of that folder to use them
elsewhere. Here are a couple notes about that.
- Paperless-ngx never modifies your original documents. It keeps
checksums of all documents and uses a scheduled sanity checker to
check that they remain the same.
- By default, paperless uses the internal ID of each document as its
filename. This might not be very convenient for export. However, you
can adjust the way files are stored in paperless by
[configuring the filename format](advanced_usage.md#file-name-handling).
- [The exporter](administration.md#exporter) is
another easy way to get your files out of paperless with reasonable
file names.
## _What file types does paperless-ngx support?_
**A:** Currently, the following files are supported:
- PDF documents, PNG images, JPEG images, TIFF images, GIF images and
WebP images are processed with OCR and converted into PDF documents.
- Plain text documents are supported as well and are added verbatim to
paperless.
- With the optional Tika integration enabled (see [Tika configuration](https://docs.paperless-ngx.com/configuration#tika)),
Paperless also supports various Office documents (.docx, .doc, .odt,
.ppt, .pptx, .odp, .xls, .xlsx, .ods).
Paperless-ngx determines the type of a file by inspecting its content
rather than its file extension. However, files processed via the


@@ -28,36 +28,36 @@ physical documents into a searchable online archive so you can keep, well, _less
## Features
- **Organize and index** your scanned documents with tags, correspondents, types, and more.
- _Your_ data is stored locally on _your_ server and is never transmitted or shared in any way, unless you explicitly choose to do so.
- Performs **OCR** on your documents, adding searchable and selectable text, even to documents scanned with only images.
- Utilizes the open-source Tesseract engine to recognize more than 100 languages.
- _New!_ Supports remote OCR with Azure AI (opt-in).
- Documents are saved as PDF/A format which is designed for long term storage, alongside the unaltered originals.
- Uses machine-learning to automatically add tags, correspondents and document types to your documents.
- **New**: Paperless-ngx can now leverage AI (Large Language Models or LLMs) for document suggestions. This is an optional feature that can be enabled (and is disabled by default).
- Supports PDF documents, images, plain text files, Office documents (Word, Excel, PowerPoint, and LibreOffice equivalents)[^1] and more.
- Paperless stores your documents plain on disk. Filenames and folders are managed by paperless and their format can be configured freely with different configurations assigned to different documents.
- **Beautiful, modern web application** that features:
- Customizable dashboard with statistics.
- Filtering by tags, correspondents, types, and more.
- Bulk editing of tags, correspondents, types and more.
- Drag-and-drop uploading of documents throughout the app.
- Customizable views can be saved and displayed on the dashboard and / or sidebar.
- Support for custom fields of various data types.
- Shareable public links with optional expiration.
- **Full text search** helps you find what you need:
- Auto completion suggests relevant words from your documents.
- Results are sorted by relevance to your search query.
- Highlighting shows you which parts of the document matched the query.
- Searching for similar documents ("More like this")
- **Email processing**[^1]: import documents from your email accounts:
- Configure multiple accounts and rules for each account.
- After processing, paperless can perform actions on the messages such as marking as read, deleting and more.
- A built-in robust **multi-user permissions** system that supports 'global' permissions as well as per document or object.
- A powerful workflow system that gives you even more control.
- **Optimized** for multi core systems: Paperless-ngx consumes multiple documents in parallel.
- The integrated sanity checker makes sure that your document archive is in good health.
[^1]: Office document and email consumption support is optional and provided by Apache Tika (see [configuration](https://docs.paperless-ngx.com/configuration/#tika))


@@ -42,12 +42,12 @@ The `CONSUMER_BARCODE_SCANNER` setting has been removed. zxing-cpp is now the on
### Action Required
- If you were already using `CONSUMER_BARCODE_SCANNER=ZXING`, simply remove the setting.
- If you had `CONSUMER_BARCODE_SCANNER=PYZBAR` or were using the default, no functional changes are needed beyond
removing the setting. zxing-cpp supports all the same barcode formats and you should see improved detection
reliability.
- The `libzbar0` / `libzbar-dev` system packages are no longer required and can be removed from any custom Docker
images or host installations.
## Database Engine
@@ -103,3 +103,30 @@ Multiple options are combined in a single value:
```bash
PAPERLESS_DB_OPTIONS="sslmode=require;sslrootcert=/certs/ca.pem;pool.max_size=10"
```
## OpenID Connect Token Endpoint Authentication
Some existing OpenID Connect setups may require an explicit token endpoint authentication method after upgrading to v3.
### Action Required
If OIDC login fails at the callback with an `invalid_client` error, add `token_auth_method` to the provider `settings` in
[`PAPERLESS_SOCIALACCOUNT_PROVIDERS`](configuration.md#PAPERLESS_SOCIALACCOUNT_PROVIDERS).
For example:
```json
{
"openid_connect": {
"APPS": [
{
...
"settings": {
"server_url": "https://login.example.com",
"token_auth_method": "client_secret_basic"
}
}
]
}
}
```


@@ -44,8 +44,8 @@ account. In short, it automates the [Docker Compose setup](#docker) described be
#### Prerequisites
- Docker and Docker Compose must be [installed](https://docs.docker.com/engine/install/){:target="\_blank"}.
- macOS users will need [GNU sed](https://formulae.brew.sh/formula/gnu-sed) with support for running as `sed` as well as [wget](https://formulae.brew.sh/formula/wget).
#### Run the installation script
@@ -63,7 +63,7 @@ credentials you provided during the installation script.
#### Prerequisites
- Docker and Docker Compose must be [installed](https://docs.docker.com/engine/install/){:target="\_blank"}.
#### Installation
@@ -101,7 +101,7 @@ credentials you provided during the installation script.
```yaml
ports:
- 8010:8000
```
3. Modify `docker-compose.env` with any configuration options you need.
@@ -140,24 +140,17 @@ a [superuser](usage.md#superusers) account.
!!! warning
It is not possible to run the container rootless if additional languages are specified via `PAPERLESS_OCR_LANGUAGES`.
If you want to run Paperless as a rootless container, set `user:` in `docker-compose.yml` to the UID and GID of your host user (use `id -u` and `id -g` to find these values). The container process starts directly as that user with no internal privilege remapping:
```yaml
webserver:
image: ghcr.io/paperless-ngx/paperless-ngx:latest
user: '1000:1000'
```
Do not combine this with `USERMAP_UID` or `USERMAP_GID`, which are intended for the non-rootless case described in step 3.
**File systems without inotify support (e.g. NFS)**
@@ -171,26 +164,25 @@ to enable polling and disable inotify. See [here](configuration.md#polling).
#### Prerequisites
- Paperless runs on Linux only, Windows is not supported.
- Python 3.11, 3.12, 3.13, or 3.14 is required. As a policy, Paperless-ngx aims to support at least the three most recent Python versions and drops support for versions as they reach end-of-life. Newer versions may work, but some dependencies may not be fully compatible.
#### Installation
1. Install dependencies. Paperless requires the following packages:
Use this list for your preferred package management:
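For example, on Debian or Ubuntu the list above translates roughly to the following (a sketch; package names can differ on other distributions):

```shell-session
sudo apt-get update
sudo apt-get install python3 python3-pip python3-dev \
    default-libmysqlclient-dev pkg-config fonts-liberation \
    imagemagick gnupg libpq-dev libmagic-dev \
    mariadb-client poppler-utils
```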
@@ -200,18 +192,17 @@ to enable polling and disable inotify. See [here](configuration.md#polling).
These dependencies are required for OCRmyPDF, which is used for text
recognition.
- `unpaper`
- `ghostscript`
- `icc-profiles-free`
- `qpdf`
- `liblept5`
- `libxml2`
- `pngquant` (suggested for certain PDF image optimizations)
- `zlib1g`
- `tesseract-ocr` >= 4.0.0 for OCR
- `tesseract-ocr` language packs (`tesseract-ocr-eng`,
`tesseract-ocr-deu`, etc)
Use this list for your preferred package management:
@@ -220,16 +211,14 @@ to enable polling and disable inotify. See [here](configuration.md#polling).
```
On Raspberry Pi, these libraries are required as well:
- `libatlas-base-dev`
- `libxslt1-dev`
- `mime-support`
You will also need these for installing some of the python dependencies:
- `build-essential`
- `python3-setuptools`
- `python3-wheel`
Use this list for your preferred package management:
@@ -279,44 +268,41 @@ to enable polling and disable inotify. See [here](configuration.md#polling).
6. Configure Paperless-ngx. See [configuration](configuration.md) for details.
Edit the included `paperless.conf` and adjust the settings to your
needs. Required settings for getting Paperless-ngx running are:
- [`PAPERLESS_REDIS`](configuration.md#PAPERLESS_REDIS) should point to your Redis server, such as
`redis://localhost:6379`.
- [`PAPERLESS_DBENGINE`](configuration.md#PAPERLESS_DBENGINE) is optional, and should be one of `postgres`,
`mariadb`, or `sqlite`
- [`PAPERLESS_DBHOST`](configuration.md#PAPERLESS_DBHOST) should be the hostname on which your
PostgreSQL server is running. Do not configure this to use
SQLite instead. Also configure port, database name, user and
password as necessary.
- [`PAPERLESS_CONSUMPTION_DIR`](configuration.md#PAPERLESS_CONSUMPTION_DIR) should point to the folder
that Paperless-ngx should watch for incoming documents.
Likewise, [`PAPERLESS_DATA_DIR`](configuration.md#PAPERLESS_DATA_DIR) and
[`PAPERLESS_MEDIA_ROOT`](configuration.md#PAPERLESS_MEDIA_ROOT) define where Paperless-ngx stores its data.
If needed, these can point to the same directory.
- [`PAPERLESS_SECRET_KEY`](configuration.md#PAPERLESS_SECRET_KEY) should be a random sequence of
characters. It is used for authentication; failing to set a unique
key allows third parties to forge authentication credentials.
- Set [`PAPERLESS_URL`](configuration.md#PAPERLESS_URL) if you are behind a reverse proxy. This should
point to your domain. Please see
[configuration](configuration.md) for more
information.
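Put together, a minimal `paperless.conf` for a PostgreSQL setup might look like this (a hedged sketch; the hostnames, paths, secret key and domain are placeholders to adapt):

```
PAPERLESS_REDIS=redis://localhost:6379
PAPERLESS_DBHOST=localhost
PAPERLESS_CONSUMPTION_DIR=/opt/paperless/consume
PAPERLESS_DATA_DIR=/opt/paperless/data
PAPERLESS_MEDIA_ROOT=/opt/paperless/media
PAPERLESS_SECRET_KEY=change-me-to-a-long-random-string
# Only needed behind a reverse proxy; example domain
PAPERLESS_URL=https://paperless.example.com
```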
You can make many more adjustments, especially for OCR.
The following options are recommended for most users:
- Set [`PAPERLESS_OCR_LANGUAGE`](configuration.md#PAPERLESS_OCR_LANGUAGE) to the language most of your
documents are written in.
- Set [`PAPERLESS_TIME_ZONE`](configuration.md#PAPERLESS_TIME_ZONE) to your local time zone.
!!! warning
Ensure your Redis instance [is secured](https://redis.io/docs/latest/operate/oss_and_stack/management/security/).
7. Create the following directories if they do not already exist:
- `/opt/paperless/media`
- `/opt/paperless/data`
- `/opt/paperless/consume`
Adjust these paths if you configured different folders.
Then verify that the `paperless` user has write permissions:
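A hedged sketch, using the default paths from the previous step:

```shell-session
sudo mkdir -p /opt/paperless/media /opt/paperless/data /opt/paperless/consume
sudo chown -R paperless:paperless /opt/paperless
sudo -u paperless touch /opt/paperless/data/.write-test \
    && sudo -u paperless rm /opt/paperless/data/.write-test
```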
@@ -391,11 +377,10 @@ to enable polling and disable inotify. See [here](configuration.md#polling).
starting point.
Paperless needs:
- The `webserver` script to run the webserver.
- The `consumer` script to watch the input folder.
- The `taskqueue` script for background workers (document consumption, etc.).
- The `scheduler` script for periodic tasks such as email checking.
!!! note
@@ -501,19 +486,19 @@ your setup depending on how you installed Paperless.
This section describes how to update an existing Paperless Docker
installation. Keep these points in mind:
- Read the [changelog](changelog.md) and
take note of breaking changes.
- Decide whether to stay on SQLite or migrate to PostgreSQL.
Both work fine with Paperless-ngx.
However, if you already have a database server running
for other services, you might as well use it for Paperless too.
- The task scheduler of Paperless, which is used to execute periodic
tasks such as email checking and maintenance, requires a
[Redis](https://redis.io/) message broker instance. The
Docker Compose route takes care of that.
- The layout of the folder structure for your documents and data
remains the same, so you can plug your old Docker volumes into
paperless-ngx and expect it to find everything where it should be.
Migration to Paperless-ngx is then performed in a few simple steps:
@@ -598,7 +583,6 @@ commands as well.
1. Stop and remove the Paperless container.
2. If using an external database, stop that container.
3. Update Redis configuration.
1. If `REDIS_URL` is already set, change it to [`PAPERLESS_REDIS`](configuration.md#PAPERLESS_REDIS)
and continue to step 4.
@@ -610,22 +594,18 @@ commands as well.
the new Redis container.
4. Update user mapping.
1. If set, change the environment variable `PUID` to `USERMAP_UID`.
1. If set, change the environment variable `PGID` to `USERMAP_GID`.
5. Update configuration paths.
1. Set the environment variable [`PAPERLESS_DATA_DIR`](configuration.md#PAPERLESS_DATA_DIR) to `/config`.
6. Update media paths.
1. Set the environment variable [`PAPERLESS_MEDIA_ROOT`](configuration.md#PAPERLESS_MEDIA_ROOT) to
`/data/media`.
7. Update timezone.
1. Set the environment variable [`PAPERLESS_TIME_ZONE`](configuration.md#PAPERLESS_TIME_ZONE) to the same
value as `TZ`.
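Taken together, the renamed variables might end up looking like this in `docker-compose.env` (a sketch with example values; `1000`, the broker hostname and the time zone are placeholders):

```
PAPERLESS_REDIS=redis://broker:6379
USERMAP_UID=1000
USERMAP_GID=1000
PAPERLESS_DATA_DIR=/config
PAPERLESS_MEDIA_ROOT=/data/media
# same value as the old TZ
PAPERLESS_TIME_ZONE=Europe/Berlin
```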
@@ -639,33 +619,33 @@ commands as well.
Paperless runs on Raspberry Pi. Some tasks can be slow on lower-powered
hardware, but a few settings can improve performance:
- Stick with SQLite to save some resources. See [troubleshooting](troubleshooting.md#log-reports-creating-paperlesstask-failed)
if you encounter issues with SQLite locking.
- If you do not need the filesystem-based consumer, consider disabling it
entirely by setting [`PAPERLESS_CONSUMER_DISABLE`](configuration.md#PAPERLESS_CONSUMER_DISABLE) to `true`.
- Consider setting [`PAPERLESS_OCR_PAGES`](configuration.md#PAPERLESS_OCR_PAGES) to 1, so that Paperless
OCRs only the first page of your documents. In most cases, this page
contains enough information to find the document later.
- [`PAPERLESS_TASK_WORKERS`](configuration.md#PAPERLESS_TASK_WORKERS) and [`PAPERLESS_THREADS_PER_WORKER`](configuration.md#PAPERLESS_THREADS_PER_WORKER) are
configured to use all cores. The Raspberry Pi models 3 and up have 4
cores, meaning that Paperless will use 2 workers and 2 threads per
worker. This may result in sluggish response times during
consumption, so you might want to lower these settings (example: 2
workers and 1 thread to always have some computing power left for
other tasks).
- Keep [`PAPERLESS_OCR_MODE`](configuration.md#PAPERLESS_OCR_MODE) at its default value `skip` and consider
OCRing your documents before feeding them into Paperless. Some
scanners are able to do this!
- Set [`PAPERLESS_OCR_SKIP_ARCHIVE_FILE`](configuration.md#PAPERLESS_OCR_SKIP_ARCHIVE_FILE) to `with_text` to skip archive
file generation for already OCRed documents, or `always` to skip it
for all documents.
- If you want to perform OCR on the device, consider using
`PAPERLESS_OCR_CLEAN=none`. This will speed up OCR times and use
less memory at the expense of slightly worse OCR results.
- If using Docker, consider setting [`PAPERLESS_WEBSERVER_WORKERS`](configuration.md#PAPERLESS_WEBSERVER_WORKERS) to 1. This will save some memory.
- Consider setting [`PAPERLESS_ENABLE_NLTK`](configuration.md#PAPERLESS_ENABLE_NLTK) to false, to disable the
more advanced language processing, which can take more memory and
processing time.
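Putting the suggestions above together, a Raspberry Pi oriented configuration fragment could look like this (a sketch; every value is the suggestion from the list, not a requirement):

```
PAPERLESS_TASK_WORKERS=2
PAPERLESS_THREADS_PER_WORKER=1
PAPERLESS_OCR_PAGES=1
PAPERLESS_OCR_MODE=skip
PAPERLESS_OCR_SKIP_ARCHIVE_FILE=with_text
PAPERLESS_OCR_CLEAN=none
PAPERLESS_WEBSERVER_WORKERS=1
PAPERLESS_ENABLE_NLTK=false
```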
For details, refer to [configuration](configuration.md).


@@ -4,27 +4,27 @@
Check for the following issues:
- Ensure that the directory you're putting your documents in is the
folder paperless is watching. With Docker, this is configured in
the `docker-compose.yml` file. Without Docker, look at the
`CONSUMPTION_DIR` setting. Don't adjust this setting if you're
using Docker.
- Ensure that Redis is up and running. Paperless does its task
processing asynchronously, and for documents to arrive at the task
processor, it needs Redis to run (see the quick check after this list).
- Ensure that the task processor is running. Docker does this
automatically. Manually invoke the task processor by executing

```shell-session
celery --app paperless worker
```
- Look at the output of paperless and inspect it for any errors.
- Go to the admin interface, and check if there are failed tasks. If
so, the tasks will contain an error message.
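A quick way to verify the Redis part, assuming `redis-cli` is installed and Redis listens on the default local port:

```shell-session
redis-cli -u redis://localhost:6379 ping
PONG
```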
## Consumer warns `OCR for XX failed`
@@ -78,12 +78,12 @@ Ensure that `chown` is possible on these directories.
This indicates that the Auto matching algorithm found no documents to
learn from. This may have two reasons:
- You don't use the Auto matching algorithm: The error can be safely
ignored in this case.
- You are using the Auto matching algorithm: The classifier explicitly
excludes documents with Inbox tags. Verify that there are documents
in your archive without inbox tags. The algorithm will only learn
from documents not in your inbox.
## UserWarning in sklearn on every single document
@@ -127,10 +127,10 @@ change in the `docker-compose.yml` file:
# The gotenberg chromium route is used to convert .eml files. We do not
# want to allow external content like tracking pixels or even javascript.
command:
- 'gotenberg'
- '--chromium-disable-javascript=true'
- '--chromium-allow-list=file:///tmp/.*'
- '--api-timeout=60s'
```
## Permission denied errors in the consumption directory


@@ -14,42 +14,42 @@ for finding and managing your documents.
Paperless essentially consists of two different parts for managing your
documents:
- The _consumer_ watches a specified folder and adds all documents in
that folder to paperless.
- The _web server_ (web UI) provides a UI that you use to manage and
search documents.
Each document has data fields that you can assign to them:
- A _Document_ is a piece of paper that sometimes contains valuable
information.
- The _correspondent_ of a document is the person, institution or
company that a document either originates from, or is sent to.
- A _tag_ is a label that you can assign to documents. Think of labels
as more powerful folders: Multiple documents can be grouped together
with a single tag, however, a single document can also have multiple
tags. This is not possible with folders. The reason folders are not
implemented in paperless is simply that tags are much more versatile
than folders.
- A _document type_ is used to demarcate the type of a document such
as letter, bank statement, invoice, contract, etc. It is used to
identify what a document is about.
- The document _storage path_ is the location where the document files
are stored. See [Storage Paths](advanced_usage.md#storage-paths) for
more information.
- The _date added_ of a document is the date the document was scanned
into paperless. You cannot and should not change this date.
- The _date created_ of a document is the date the document was
initially issued. This can be the date you bought a product, the
date you signed a contract, or the date a letter was sent to you.
- The _archive serial number_ (short: ASN) of a document is the
identifier of the document in your physical document binders. See
[recommended workflow](#usage-recommended-workflow) below.
- The _content_ of a document is the text that was OCR'ed from the
document. This text is fed into the search engine and is used for
matching tags, correspondents and document types.
- Paperless-ngx also supports _custom fields_ which can be used to
store additional metadata about a document.
## The Web UI
@@ -93,12 +93,12 @@ download the document or share it via a share link.
Think of versions as **file history** for a document.
- Versions track the underlying file and extracted text content (OCR/text).
- Metadata such as tags, correspondent, document type, storage path and custom fields stay on the "root" document.
- Version files follow normal filename formatting (including storage paths) and add a `_vN` suffix (for example `_v1`, `_v2`).
- By default, search and document content use the latest version.
- In document detail, selecting a version switches the preview, file metadata and content (and download etc buttons) to that version.
- Deleting a non-root version keeps metadata and falls back to the latest remaining version.
### Management Lists
@@ -218,21 +218,20 @@ patterns can include wildcards and multiple patterns separated by a comma.
The actions all ensure that the same mail is not consumed twice by
different means. These are as follows:
- **Delete:** Immediately deletes mail that paperless has consumed
documents from. Use with caution.
- **Mark as read:** Mark consumed mail as read. Paperless will not
consume documents from already read mails. If you read a mail before
paperless sees it, it will be ignored.
- **Flag:** Sets the 'important' flag on mails with consumed
documents. Paperless will not consume flagged mails.
- **Move to folder:** Moves consumed mails out of the way so that
paperless won't consume them again.
- **Add custom Tag:** Adds a custom tag to mails with consumed
documents (the IMAP standard calls these "keywords"). Paperless
will not consume mails already tagged. Not all mail servers support
this feature!
- **Apple Mail support:** Apple Mail clients allow differently colored tags. For this to work use `apple:<color>` (e.g. _apple:green_) as a custom tag. Available colors are _red_, _orange_, _yellow_, _blue_, _green_, _violet_ and _grey_.
!!! warning
@@ -325,12 +324,12 @@ or using [email](#workflow-action-email) or [webhook](#workflow-action-webhook)
"Share links" are public links to files (or an archive of files) and can be created and managed under the 'Send' button on the document detail screen or from the bulk editor.
- Share links do not require a user to login and thus link directly to a file or bundled download.
- Links are unique and are of the form `{paperless-url}/share/{randomly-generated-slug}`.
- Links can optionally have an expiration time set.
- After a link expires or is deleted users will be redirected to the regular paperless-ngx login.
- From the document detail screen you can create a share link for that single document.
- From the bulk editor you can create a **share link bundle** for any selection. Paperless-ngx prepares a ZIP archive in the background and exposes a single share link. You can revisit the "Manage share link bundles" dialog to monitor progress, retry failed bundles, or delete links.
!!! tip
@@ -514,25 +513,25 @@ flowchart TD
Workflows allow you to filter by:
- Source, e.g. documents uploaded via consume folder, API (& the web UI) and mail fetch
- File name, including wildcards e.g. \*.pdf will apply to all pdfs.
- File path, including wildcards. Note that enabling `PAPERLESS_CONSUMER_RECURSIVE` would allow, for
example, automatically assigning documents to different owners based on the upload directory.
- Mail rule. Choosing this option will force 'mail fetch' to be the workflow source.
- Content matching (`Added`, `Updated` and `Scheduled` triggers only). Filter document content using the matching settings.
There are also 'advanced' filters available for `Added`, `Updated` and `Scheduled` triggers:
- Any Tags: Filter for documents with any of the specified tags.
- All Tags: Filter for documents with all of the specified tags.
- No Tags: Filter for documents with none of the specified tags.
- Document type: Filter documents with this document type.
- Not Document types: Filter documents without any of these document types.
- Correspondent: Filter documents with this correspondent.
- Not Correspondents: Filter documents without any of these correspondents.
- Storage path: Filter documents with this storage path.
- Not Storage paths: Filter documents without any of these storage paths.
- Custom field query: Filter documents with a custom field query (the same as used for the document list filters).
### Workflow Actions
@@ -544,37 +543,37 @@ The following workflow action types are available:
"Assignment" actions can assign:
- Title, see [workflow placeholders](usage.md#workflow-placeholders) below
- Tags, correspondent, document type and storage path
- Document owner
- View and / or edit permissions to users or groups
- Custom fields. Note that no value for the field will be set
##### Removal {#workflow-action-removal}
"Removal" actions can remove either all of or specific sets of the following:
- Tags, correspondents, document types or storage paths
- Document owner
- View and / or edit permissions
- Custom fields
##### Email {#workflow-action-email}
"Email" actions can send documents via email. This action requires a mail server to be [configured](configuration.md#email-sending). You can specify:
- The recipient email address(es) separated by commas
- The subject and body of the email, which can include placeholders, see [placeholders](usage.md#workflow-placeholders) below
- Whether to include the document as an attachment
##### Webhook {#workflow-action-webhook}
"Webhook" actions send a POST request to a specified URL. You can specify:
- The URL to send the request to
- The request body as text or as key-value pairs, which can include placeholders, see [placeholders](usage.md#workflow-placeholders) below.
- Encoding for the request body, either JSON or form data
- The request headers as key-value pairs
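For illustration, the request a webhook action sends is a plain POST, so a receiver can be exercised manually; a hedged sketch with a hypothetical URL and body fields:

```shell-session
curl -X POST "https://hooks.example.com/paperless" \
    -H "Content-Type: application/json" \
    -d '{"doc_url": "https://paperless.example.com/documents/123/", "doc_title": "Invoice"}'
```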
For security reasons, webhooks can be limited to specific ports and disallowed from connecting to local URLs. See the relevant
[configuration settings](configuration.md#workflow-webhooks) to change this behavior. If you are allowing non-admins to create workflows,
@@ -605,33 +604,33 @@ The available inputs differ depending on the type of workflow trigger.
This is because at the time of consumption (when the text is to be set), no automatic tags etc. have been
applied. You can use the following placeholders in the template with any trigger type:
- `{{correspondent}}`: assigned correspondent name
- `{{document_type}}`: assigned document type name
- `{{owner_username}}`: assigned owner username
- `{{added}}`: added datetime
- `{{added_year}}`: added year
- `{{added_year_short}}`: added year (short form)
- `{{added_month}}`: added month
- `{{added_month_name}}`: added month name
- `{{added_month_name_short}}`: added month short name
- `{{added_day}}`: added day
- `{{added_time}}`: added time in HH:MM format
- `{{original_filename}}`: original file name without extension
- `{{filename}}`: current file name without extension (for "added" workflows this may not be final yet, you can use `{{original_filename}}`)
- `{{doc_title}}`: current document title (cannot be used in title assignment)
The following placeholders are only available for "added" or "updated" triggers
- `{{created}}`: created datetime
- `{{created_year}}`: created year
- `{{created_year_short}}`: created year (short form)
- `{{created_month}}`: created month
- `{{created_month_name}}`: created month name
- `{{created_month_name_short}}`: created month short name
- `{{created_day}}`: created day
- `{{created_time}}`: created time in HH:MM format
- `{{doc_url}}`: URL to the document in the web UI. Requires the `PAPERLESS_URL` setting to be set.
- `{{doc_id}}`: Document ID
##### Examples
@@ -676,26 +675,26 @@ Multiple fields may be attached to a document but the same field name cannot be
The following custom field types are supported:
- `Text`: any text
- `Boolean`: true / false (check / unchecked) field
- `Date`: date
- `URL`: a valid url
- `Integer`: integer number e.g. 12
- `Number`: float number e.g. 12.3456
- `Monetary`: [ISO 4217 currency code](https://en.wikipedia.org/wiki/ISO_4217#List_of_ISO_4217_currency_codes) and a number with exactly two decimals, e.g. USD12.30
- `Document Link`: reference(s) to other document(s) displayed as links, automatically creates a symmetrical link in reverse
- `Select`: a pre-defined list of strings from which the user can choose
## PDF Actions
Paperless-ngx supports basic editing operations for PDFs (these operations currently cannot be performed on non-PDF files). When viewing an individual document you can
open the 'PDF Editor' to use a simple UI for re-arranging, rotating, deleting pages and splitting documents.
- Merging documents: available when selecting multiple documents for 'bulk editing'.
- Rotating documents: available when selecting multiple documents for 'bulk editing' and via the pdf editor on an individual document's details page.
- Splitting documents: via the pdf editor on an individual document's details page.
- Deleting pages: via the pdf editor on an individual document's details page.
- Re-arranging pages: via the pdf editor on an individual document's details page.
!!! important
@@ -773,18 +772,18 @@ the system.
Here are a couple examples of tags and types that you could use in your
collection.
- An `inbox` tag for newly added documents that you haven't manually
edited yet.
- A tag `car` for everything car related (repairs, registration,
insurance, etc)
- A tag `todo` for documents that you still need to do something with,
such as reply, or perform some task online.
- A tag `bank account x` for all bank statement related to that
account.
- A tag `mail` for anything that you added to paperless via its mail
processing capabilities.
- A tag `missing_metadata` when you still need to add some metadata to
a document, but can't or don't want to do this right now.
## Searching {#basic-usage_searching}
@@ -873,8 +872,8 @@ The following diagram shows how easy it is to manage your documents.
### Preparations in paperless
- Create an inbox tag that gets assigned to all new documents.
- Create a TODO tag.
### Processing of the physical documents
@@ -948,15 +947,15 @@ Some documents require attention and require you to act on the document.
You may take two different approaches to handle these documents based on
how regularly you intend to scan documents and use paperless.
- If you scan and process your documents in paperless regularly,
assign a TODO tag to all scanned documents that you need to process.
Create a saved view on the dashboard that shows all documents with
this tag.
- If you do not scan documents regularly and use paperless solely for
archiving, create a physical todo box next to your physical inbox
and put documents you need to process in the TODO box. Once you have
performed the task associated with the document, move it to the
inbox.
## Remote OCR
@@ -977,64 +976,63 @@ or page limitations (e.g. with a free tier).
Paperless-ngx consists of the following components:
- **The webserver:** This serves the administration pages, the API,
and the new frontend. This is the main tool you'll be using to interact
with paperless. You may start the webserver directly with
```shell-session
cd /path/to/paperless/src/
granian --interface asginl --ws "paperless.asgi:application"
```

or by any other means such as Apache `mod_wsgi`.
- **The consumer:** This is what watches your consumption folder for
documents. However, the consumer itself does not really consume your
documents; instead, it notifies a task processor that a new file is
ready for consumption. I suppose it should be named differently. This
was also used to check your emails, but that's now done elsewhere as
well.
Start the consumer with the management command `document_consumer`:

```shell-session
cd /path/to/paperless/src/
python3 manage.py document_consumer
```
- **The task processor:** Paperless relies on [Celery - Distributed
Task Queue](https://docs.celeryq.dev/en/stable/index.html) for doing
most of the heavy lifting. This is a task queue that accepts tasks
from multiple sources and processes these in parallel. It also comes
with a scheduler that executes certain commands periodically.
This task processor is responsible for:
- Consuming documents. When the consumer finds new documents, it
notifies the task processor to start a consumption task.
- The task processor also performs the consumption of any
documents you upload through the web interface.
- Consuming emails. It periodically checks your configured
accounts for new emails and notifies the task processor to
consume the attachment of an email.
- Maintaining the search index and the automatic matching
algorithm. These are things that paperless needs to do from time
to time in order to operate properly.
This allows paperless to process multiple documents from your
consumption folder in parallel! On a modern multi core system, this
makes the consumption process with full OCR blazingly fast.
The task processor comes with a built-in admin interface that you
can use to check whether any of the tasks fail and inspect the
errors (i.e., wrong email credentials, errors during consuming a
specific file, etc).
- A [redis](https://redis.io/) message broker: This is a really
lightweight service that is responsible for getting the tasks from
the webserver and the consumer to the task scheduler. These run in a
different process (maybe even on different machines!), and
therefore, this is necessary.
- Optional: A database server. Paperless supports PostgreSQL, MariaDB
and SQLite for storing its data.


@@ -1,6 +1,6 @@
[project]
name = "paperless-ngx"
version = "2.20.10"
version = "2.20.13"
description = "A community-supported supercharged document management system: scan, index and archive all your physical documents"
readme = "README.md"
requires-python = ">=3.11"
@@ -26,7 +26,7 @@ dependencies = [
# WARNING: django does not use semver.
# Only patch versions are guaranteed to not introduce breaking changes.
"django~=5.2.10",
"django-allauth[mfa,socialaccount]~=65.14.0",
"django-allauth[mfa,socialaccount]~=65.15.0",
"django-auditlog~=3.4.1",
"django-cachalot~=2.9.0",
"django-celery-results~=2.6.0",
@@ -42,13 +42,14 @@ dependencies = [
"djangorestframework~=3.16",
"djangorestframework-guardian~=0.4.0",
"drf-spectacular~=0.28",
"drf-spectacular-sidecar~=2026.1.1",
"drf-spectacular-sidecar~=2026.3.1",
"drf-writable-nested~=0.7.1",
"faiss-cpu>=1.10",
"filelock~=3.24.3",
"filelock~=3.25.2",
"flower~=2.0.1",
"gotenberg-client~=0.13.1",
"httpx-oauth~=0.16",
"ijson>=3.2",
"imap-tools~=1.11.0",
"jinja2~=3.1.5",
"langdetect~=1.0.9",
@@ -59,7 +60,7 @@ dependencies = [
"llama-index-llms-openai>=0.6.13",
"llama-index-vector-stores-faiss>=0.5.2",
"nltk~=3.9.1",
"ocrmypdf~=16.13.0",
"ocrmypdf~=17.3.0",
"openai>=1.76",
"pathvalidate~=3.3.1",
"pdf2image~=1.17.0",
@@ -71,7 +72,7 @@ dependencies = [
"rapidfuzz~=3.14.0",
"redis[hiredis]~=5.2.1",
"regex>=2025.9.18",
"scikit-learn~=1.7.0",
"scikit-learn~=1.8.0",
"sentence-transformers>=4.1",
"setproctitle~=1.3.4",
"tika-client~=0.10.0",
@@ -110,7 +111,7 @@ docs = [
testing = [
"daphne",
"factory-boy~=3.3.1",
"faker~=40.5.1",
"faker~=40.8.0",
"imagehash",
"pytest~=9.0.0",
"pytest-cov~=7.0.0",
@@ -247,15 +248,13 @@ lint.per-file-ignores."docker/wait-for-redis.py" = [
lint.per-file-ignores."src/documents/models.py" = [
"SIM115",
]
lint.per-file-ignores."src/paperless_tesseract/tests/test_parser.py" = [
"RUF001",
]
lint.isort.force-single-line = true
[tool.codespell]
write-changes = true
ignore-words-list = "criterias,afterall,valeu,ureue,equest,ure,assertIn,Oktober,commitish"
skip = "src-ui/src/locale/*,src-ui/pnpm-lock.yaml,src-ui/e2e/*,src/paperless_mail/tests/samples/*,src/documents/tests/samples/*,*.po,*.json"
skip = "src-ui/src/locale/*,src-ui/pnpm-lock.yaml,src-ui/e2e/*,src/paperless_mail/tests/samples/*,src/paperless/tests/samples/mail/*,src/documents/tests/samples/*,*.po,*.json"
[tool.pytest]
minversion = "9.0"
@@ -270,10 +269,6 @@ testpaths = [
"src/documents/tests/",
"src/paperless/tests/",
"src/paperless_mail/tests/",
"src/paperless_tesseract/tests/",
"src/paperless_tika/tests",
"src/paperless_text/tests/",
"src/paperless_remote/tests/",
"src/paperless_ai/tests",
]


@@ -19,6 +19,4 @@ following additional information about it:
* Correspondent: ${DOCUMENT_CORRESPONDENT}
* Tags: ${DOCUMENT_TAGS}
It was consumed with the passphrase ${PASSPHRASE}
"

File diff suppressed because it is too large.


@@ -1,6 +1,6 @@
{
"name": "paperless-ngx-ui",
"version": "2.20.10",
"version": "2.20.13",
"scripts": {
"preinstall": "npx only-allow pnpm",
"ng": "ng",
@@ -11,17 +11,17 @@
},
"private": true,
"dependencies": {
"@angular/cdk": "^21.2.0",
"@angular/common": "~21.2.0",
"@angular/compiler": "~21.2.0",
"@angular/core": "~21.2.0",
"@angular/forms": "~21.2.0",
"@angular/localize": "~21.2.0",
"@angular/platform-browser": "~21.2.0",
"@angular/platform-browser-dynamic": "~21.2.0",
"@angular/router": "~21.2.0",
"@angular/cdk": "^21.2.2",
"@angular/common": "~21.2.4",
"@angular/compiler": "~21.2.4",
"@angular/core": "~21.2.4",
"@angular/forms": "~21.2.4",
"@angular/localize": "~21.2.4",
"@angular/platform-browser": "~21.2.4",
"@angular/platform-browser-dynamic": "~21.2.4",
"@angular/router": "~21.2.4",
"@ng-bootstrap/ng-bootstrap": "^20.0.0",
"@ng-select/ng-select": "^21.4.1",
"@ng-select/ng-select": "^21.5.2",
"@ngneat/dirty-check-forms": "^3.0.3",
"@popperjs/core": "^2.11.8",
"bootstrap": "^5.3.8",
@@ -42,26 +42,26 @@
"devDependencies": {
"@angular-builders/custom-webpack": "^21.0.3",
"@angular-builders/jest": "^21.0.3",
"@angular-devkit/core": "^21.2.0",
"@angular-devkit/schematics": "^21.2.0",
"@angular-devkit/core": "^21.2.2",
"@angular-devkit/schematics": "^21.2.2",
"@angular-eslint/builder": "21.3.0",
"@angular-eslint/eslint-plugin": "21.3.0",
"@angular-eslint/eslint-plugin-template": "21.3.0",
"@angular-eslint/schematics": "21.3.0",
"@angular-eslint/template-parser": "21.3.0",
"@angular/build": "^21.2.0",
"@angular/cli": "~21.2.0",
"@angular/compiler-cli": "~21.2.0",
"@angular/build": "^21.2.2",
"@angular/cli": "~21.2.2",
"@angular/compiler-cli": "~21.2.4",
"@codecov/webpack-plugin": "^1.9.1",
"@playwright/test": "^1.58.2",
"@types/jest": "^30.0.0",
"@types/node": "^25.3.3",
"@typescript-eslint/eslint-plugin": "^8.54.0",
"@typescript-eslint/parser": "^8.54.0",
"@typescript-eslint/utils": "^8.54.0",
"eslint": "^10.0.2",
"jest": "30.2.0",
"jest-environment-jsdom": "^30.2.0",
"@types/node": "^25.4.0",
"@typescript-eslint/eslint-plugin": "^8.57.0",
"@typescript-eslint/parser": "^8.57.0",
"@typescript-eslint/utils": "^8.57.0",
"eslint": "^10.0.3",
"jest": "30.3.0",
"jest-environment-jsdom": "^30.3.0",
"jest-junit": "^16.0.0",
"jest-preset-angular": "^16.1.1",
"jest-websocket-mock": "^2.5.0",

src-ui/pnpm-lock.yaml (generated, 1858 lines changed)

File diff suppressed because it is too large.


@@ -59,7 +59,7 @@
<div [ngbNavOutlet]="nav" class="border-start border-end border-bottom p-3 mb-3 shadow-sm"></div>
<div class="btn-toolbar" role="toolbar">
<div class="btn-group me-2">
<button type="button" (click)="discardChanges()" class="btn btn-outline-secondary" [disabled]="loading || (isDirty$ | async) === false" i18n>Discard</button>
<button type="button" (click)="discardChanges()" class="btn btn-outline-secondary" [disabled]="loading || (isDirty$ | async) === false" i18n>Cancel</button>
</div>
<div class="btn-group">
<button type="submit" class="btn btn-primary" [disabled]="loading || !configForm.valid || (isDirty$ | async) === false" i18n>Save</button>


@@ -1,7 +1,7 @@
<nav class="navbar navbar-dark fixed-top bg-primary flex-md-nowrap p-0 shadow-sm">
<button class="navbar-toggler d-md-none collapsed border-0" type="button" data-toggle="collapse"
data-target="#sidebarMenu" aria-controls="sidebarMenu" aria-expanded="false" aria-label="Toggle navigation"
(click)="isMenuCollapsed = !isMenuCollapsed">
(click)="mobileSearchHidden = false; isMenuCollapsed = !isMenuCollapsed">
<span class="navbar-toggler-icon"></span>
</button>
<a class="navbar-brand d-flex align-items-center me-0 px-3 py-3 order-sm-0"
@@ -24,7 +24,8 @@
}
</div>
</a>
<div class="search-container flex-grow-1 py-2 pb-3 pb-sm-2 px-3 ps-md-4 me-sm-auto order-3 order-sm-1">
<div class="search-container flex-grow-1 py-2 pb-3 pb-sm-2 px-3 ps-md-4 me-sm-auto order-3 order-sm-1"
[class.mobile-hidden]="mobileSearchHidden">
<div class="col-12 col-md-7">
<pngx-global-search></pngx-global-search>
</div>
@@ -378,7 +379,7 @@
</div>
</nav>
<main role="main" class="ms-sm-auto px-md-4"
<main role="main" class="ms-sm-auto px-md-4" [class.mobile-search-hidden]="mobileSearchHidden"
[ngClass]="slimSidebarEnabled ? 'col-slim' : 'col-md-9 col-lg-10 col-xxxl-11'">
<router-outlet></router-outlet>
</main>


@@ -44,6 +44,23 @@
.sidebar {
top: 3.5rem;
}
.search-container {
max-height: 4.5rem;
overflow: hidden;
transition: max-height .2s ease, opacity .2s ease, padding-top .2s ease, padding-bottom .2s ease;
&.mobile-hidden {
max-height: 0;
opacity: 0;
padding-top: 0 !important;
padding-bottom: 0 !important;
}
}
main.mobile-search-hidden {
padding-top: 56px;
}
}
main {


@@ -293,6 +293,59 @@ describe('AppFrameComponent', () => {
expect(component.isMenuCollapsed).toBeTruthy()
})
it('should hide mobile search when scrolling down and show it when scrolling up', () => {
Object.defineProperty(globalThis, 'innerWidth', {
value: 767,
})
component.ngOnInit()
Object.defineProperty(globalThis, 'scrollY', {
configurable: true,
value: 40,
})
component.onWindowScroll()
expect(component.mobileSearchHidden).toBe(true)
Object.defineProperty(globalThis, 'scrollY', {
configurable: true,
value: 0,
})
component.onWindowScroll()
expect(component.mobileSearchHidden).toBe(false)
})
it('should keep mobile search visible on desktop scroll or resize', () => {
Object.defineProperty(globalThis, 'innerWidth', {
value: 1024,
})
component.ngOnInit()
component.mobileSearchHidden = true
component.onWindowScroll()
expect(component.mobileSearchHidden).toBe(false)
component.mobileSearchHidden = true
component.onWindowResize()
expect(component.mobileSearchHidden).toBe(false)
})
it('should keep mobile search visible while the mobile menu is expanded', () => {
Object.defineProperty(globalThis, 'innerWidth', {
value: 767,
})
component.ngOnInit()
component.isMenuCollapsed = false
Object.defineProperty(globalThis, 'scrollY', {
configurable: true,
value: 40,
})
component.onWindowScroll()
expect(component.mobileSearchHidden).toBe(false)
})
it('should support close document & navigate on close current doc', () => {
const closeSpy = jest.spyOn(openDocumentsService, 'closeDocument')
closeSpy.mockReturnValue(of(true))


@@ -51,6 +51,8 @@ import { ComponentWithPermissions } from '../with-permissions/with-permissions.c
import { GlobalSearchComponent } from './global-search/global-search.component'
import { ToastsDropdownComponent } from './toasts-dropdown/toasts-dropdown.component'
const SCROLL_THRESHOLD = 16
@Component({
selector: 'pngx-app-frame',
templateUrl: './app-frame.component.html',
@@ -94,6 +96,10 @@ export class AppFrameComponent
slimSidebarAnimating: boolean = false
public mobileSearchHidden: boolean = false
private lastScrollY: number = 0
constructor() {
super()
const permissionsService = this.permissionsService
@@ -111,6 +117,8 @@ export class AppFrameComponent
}
ngOnInit(): void {
this.lastScrollY = window.scrollY
if (this.settingsService.get(SETTINGS_KEYS.UPDATE_CHECKING_ENABLED)) {
this.checkForUpdates()
}
@@ -263,6 +271,38 @@ export class AppFrameComponent
return this.settingsService.get(SETTINGS_KEYS.AI_ENABLED)
}
@HostListener('window:resize')
onWindowResize(): void {
if (!this.isMobileViewport()) {
this.mobileSearchHidden = false
}
}
@HostListener('window:scroll')
onWindowScroll(): void {
const currentScrollY = window.scrollY
if (!this.isMobileViewport() || this.isMenuCollapsed === false) {
this.mobileSearchHidden = false
this.lastScrollY = currentScrollY
return
}
const delta = currentScrollY - this.lastScrollY
if (currentScrollY <= 0 || delta < -SCROLL_THRESHOLD) {
this.mobileSearchHidden = false
} else if (currentScrollY > SCROLL_THRESHOLD && delta > SCROLL_THRESHOLD) {
this.mobileSearchHidden = true
}
this.lastScrollY = currentScrollY
}
private isMobileViewport(): boolean {
return window.innerWidth < 768
}
closeMenu() {
this.isMenuCollapsed = true
}


@@ -31,8 +31,8 @@ export enum EditDialogMode {
@Directive()
export abstract class EditDialogComponent<
T extends ObjectWithPermissions | ObjectWithId,
>
extends LoadingComponentWithPermissions
implements OnInit
{


@@ -631,6 +631,59 @@ describe('FilterableDropdownComponent & FilterableDropdownSelectionModel', () =>
])
})
it('deselecting a parent clears selected descendants', () => {
const root: Tag = { id: 100, name: 'Root Tag' }
const child: Tag = { id: 101, name: 'Child Tag', parent: root.id }
const grandchild: Tag = {
id: 102,
name: 'Grandchild Tag',
parent: child.id,
}
const other: Tag = { id: 103, name: 'Other Tag' }
selectionModel.items = [root, child, grandchild, other]
selectionModel.set(root.id, ToggleableItemState.Selected, false)
selectionModel.set(child.id, ToggleableItemState.Selected, false)
selectionModel.set(grandchild.id, ToggleableItemState.Selected, false)
selectionModel.set(other.id, ToggleableItemState.Selected, false)
selectionModel.toggle(root.id, false)
expect(selectionModel.getSelectedItems()).toEqual([other])
})
it('un-excluding a parent clears excluded descendants', () => {
const root: Tag = { id: 110, name: 'Root Tag' }
const child: Tag = { id: 111, name: 'Child Tag', parent: root.id }
const other: Tag = { id: 112, name: 'Other Tag' }
selectionModel.items = [root, child, other]
selectionModel.set(root.id, ToggleableItemState.Excluded, false)
selectionModel.set(child.id, ToggleableItemState.Excluded, false)
selectionModel.set(other.id, ToggleableItemState.Excluded, false)
selectionModel.exclude(root.id, false)
expect(selectionModel.getExcludedItems()).toEqual([other])
})
it('excluding a selected parent clears selected descendants', () => {
const root: Tag = { id: 120, name: 'Root Tag' }
const child: Tag = { id: 121, name: 'Child Tag', parent: root.id }
const other: Tag = { id: 122, name: 'Other Tag' }
selectionModel.manyToOne = true
selectionModel.items = [root, child, other]
selectionModel.set(root.id, ToggleableItemState.Selected, false)
selectionModel.set(child.id, ToggleableItemState.Selected, false)
selectionModel.set(other.id, ToggleableItemState.Selected, false)
selectionModel.exclude(root.id, false)
expect(selectionModel.getExcludedItems()).toEqual([root])
expect(selectionModel.getSelectedItems()).toEqual([other])
})
it('resorts items immediately when document count sorting enabled', () => {
const apple: Tag = { id: 55, name: 'Apple' }
const zebra: Tag = { id: 56, name: 'Zebra' }

View File

@@ -235,6 +235,7 @@ export class FilterableDropdownSelectionModel {
state == ToggleableItemState.Excluded
) {
this.temporarySelectionStates.delete(id)
this.clearDescendantSelections(id)
}
if (!id) {
@@ -261,6 +262,7 @@ export class FilterableDropdownSelectionModel {
if (this.manyToOne || this.singleSelect) {
this.temporarySelectionStates.set(id, ToggleableItemState.Excluded)
this.clearDescendantSelections(id)
if (this.singleSelect) {
for (let key of this.temporarySelectionStates.keys()) {
@@ -281,9 +283,15 @@ export class FilterableDropdownSelectionModel {
newState = ToggleableItemState.NotSelected
}
this.temporarySelectionStates.set(id, newState)
if (newState == ToggleableItemState.Excluded) {
this.clearDescendantSelections(id)
}
}
} else if (!id || state == ToggleableItemState.Excluded) {
this.temporarySelectionStates.delete(id)
if (id) {
this.clearDescendantSelections(id)
}
}
if (fireEvent) {
@@ -295,6 +303,33 @@ export class FilterableDropdownSelectionModel {
return this.selectionStates.get(id) || ToggleableItemState.NotSelected
}
private clearDescendantSelections(id: number) {
for (const descendantID of this.getDescendantIDs(id)) {
this.temporarySelectionStates.delete(descendantID)
}
}
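// Breadth-first walk over the flat items list following `parent` links,
// collecting every transitive descendant of the given id.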
private getDescendantIDs(id: number): number[] {
const descendants: number[] = []
const queue: number[] = [id]
while (queue.length) {
const parentID = queue.shift()
for (const item of this._items) {
if (
typeof item?.id === 'number' &&
typeof (item as any)['parent'] === 'number' &&
(item as any)['parent'] === parentID
) {
descendants.push(item.id)
queue.push(item.id)
}
}
}
return descendants
}
get logicalOperator(): LogicalOperator {
return this.temporaryLogicalOperator
}

View File

@@ -950,8 +950,8 @@ describe('DocumentDetailComponent', () => {
it('should support reprocess, confirm and close modal after started', () => {
initNormally()
const bulkEditSpy = jest.spyOn(documentService, 'bulkEdit')
bulkEditSpy.mockReturnValue(of(true))
const reprocessSpy = jest.spyOn(documentService, 'reprocessDocuments')
reprocessSpy.mockReturnValue(of(true))
let openModal: NgbModalRef
modalService.activeInstances.subscribe((modal) => (openModal = modal[0]))
const modalSpy = jest.spyOn(modalService, 'open')
@@ -959,7 +959,7 @@ describe('DocumentDetailComponent', () => {
component.reprocess()
const modalCloseSpy = jest.spyOn(openModal, 'close')
openModal.componentInstance.confirmClicked.next()
expect(bulkEditSpy).toHaveBeenCalledWith([doc.id], 'reprocess', {})
expect(reprocessSpy).toHaveBeenCalledWith([doc.id])
expect(modalSpy).toHaveBeenCalled()
expect(toastSpy).toHaveBeenCalled()
expect(modalCloseSpy).toHaveBeenCalled()
@@ -967,13 +967,13 @@ describe('DocumentDetailComponent', () => {
it('should show error if redo ocr call fails', () => {
initNormally()
const bulkEditSpy = jest.spyOn(documentService, 'bulkEdit')
const reprocessSpy = jest.spyOn(documentService, 'reprocessDocuments')
let openModal: NgbModalRef
modalService.activeInstances.subscribe((modal) => (openModal = modal[0]))
const toastSpy = jest.spyOn(toastService, 'showError')
component.reprocess()
const modalCloseSpy = jest.spyOn(openModal, 'close')
bulkEditSpy.mockReturnValue(throwError(() => new Error('error occurred')))
reprocessSpy.mockReturnValue(throwError(() => new Error('error occurred')))
openModal.componentInstance.confirmClicked.next()
expect(toastSpy).toHaveBeenCalled()
expect(modalCloseSpy).not.toHaveBeenCalled()
@@ -1644,9 +1644,9 @@ describe('DocumentDetailComponent', () => {
expect(
fixture.debugElement.query(By.css('.preview-sticky img'))
).not.toBeUndefined()
;(component.document.mime_type =
;((component.document.mime_type =
'application/vnd.openxmlformats-officedocument.wordprocessingml.document'),
fixture.detectChanges()
fixture.detectChanges())
expect(component.archiveContentRenderType).toEqual(
component.ContentRenderType.Other
)
@@ -1669,18 +1669,15 @@ describe('DocumentDetailComponent', () => {
modal.componentInstance.pages = [{ page: 1, rotate: 0, splitAfter: false }]
modal.componentInstance.confirm()
let req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/edit_pdf/`
)
expect(req.request.body).toEqual({
documents: [10],
method: 'edit_pdf',
parameters: {
operations: [{ page: 1, rotate: 0, doc: 0 }],
delete_original: false,
update_document: false,
include_metadata: true,
source_mode: 'explicit_selection',
},
operations: [{ page: 1, rotate: 0, doc: 0 }],
delete_original: false,
update_document: false,
include_metadata: true,
source_mode: 'explicit_selection',
})
req.error(new ErrorEvent('failed'))
expect(errorSpy).toHaveBeenCalled()
@@ -1691,7 +1688,7 @@ describe('DocumentDetailComponent', () => {
modal.componentInstance.deleteOriginal = true
modal.componentInstance.confirm()
req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/edit_pdf/`
)
req.flush(true)
expect(closeSpy).toHaveBeenCalled()
@@ -1711,18 +1708,15 @@ describe('DocumentDetailComponent', () => {
dialog.deleteOriginal = true
dialog.confirm()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/remove_password/`
)
expect(req.request.body).toEqual({
documents: [10],
method: 'remove_password',
parameters: {
password: 'secret',
update_document: false,
include_metadata: false,
delete_original: true,
source_mode: 'explicit_selection',
},
password: 'secret',
update_document: false,
include_metadata: false,
delete_original: true,
source_mode: 'explicit_selection',
})
req.flush(true)
})
@@ -1737,7 +1731,7 @@ describe('DocumentDetailComponent', () => {
expect(errorSpy).toHaveBeenCalled()
httpTestingController.expectNone(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/remove_password/`
)
})
@@ -1753,7 +1747,7 @@ describe('DocumentDetailComponent', () => {
modal.componentInstance as PasswordRemovalConfirmDialogComponent
dialog.confirm()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/remove_password/`
)
req.error(new ErrorEvent('failed'))
@@ -1774,7 +1768,7 @@ describe('DocumentDetailComponent', () => {
modal.componentInstance as PasswordRemovalConfirmDialogComponent
dialog.confirm()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/remove_password/`
)
req.flush(true)

View File

@@ -1379,27 +1379,25 @@ export class DocumentDetailComponent
modal.componentInstance.btnCaption = $localize`Proceed`
modal.componentInstance.confirmClicked.subscribe(() => {
modal.componentInstance.buttonsEnabled = false
this.documentsService
.bulkEdit([this.document.id], 'reprocess', {})
.subscribe({
next: () => {
this.toastService.showInfo(
$localize`Reprocess operation for "${this.document.title}" will begin in the background.`
)
if (modal) {
modal.close()
}
},
error: (error) => {
if (modal) {
modal.componentInstance.buttonsEnabled = true
}
this.toastService.showError(
$localize`Error executing operation`,
error
)
},
})
this.documentsService.reprocessDocuments([this.document.id]).subscribe({
next: () => {
this.toastService.showInfo(
$localize`Reprocess operation for "${this.document.title}" will begin in the background.`
)
if (modal) {
modal.close()
}
},
error: (error) => {
if (modal) {
modal.componentInstance.buttonsEnabled = true
}
this.toastService.showError(
$localize`Error executing operation`,
error
)
},
})
})
}
@@ -1766,7 +1764,7 @@ export class DocumentDetailComponent
.subscribe(() => {
modal.componentInstance.buttonsEnabled = false
this.documentsService
.bulkEdit([sourceDocumentId], 'edit_pdf', {
.editPdfDocuments([sourceDocumentId], {
operations: modal.componentInstance.getOperations(),
delete_original: modal.componentInstance.deleteOriginal,
update_document:
@@ -1824,7 +1822,7 @@ export class DocumentDetailComponent
dialog.buttonsEnabled = false
this.networkActive = true
this.documentsService
.bulkEdit([sourceDocumentId], 'remove_password', {
.removePasswordDocuments([sourceDocumentId], {
password: this.password,
update_document: dialog.updateDocument,
include_metadata: dialog.includeMetadata,

View File

@@ -1,3 +1,4 @@
import { DatePipe } from '@angular/common'
import { provideHttpClient, withInterceptorsFromDi } from '@angular/common/http'
import {
HttpTestingController,
@@ -138,6 +139,7 @@ describe('BulkEditorComponent', () => {
},
},
FilterPipe,
DatePipe,
SettingsService,
{
provide: UserService,
@@ -849,13 +851,11 @@ describe('BulkEditorComponent', () => {
expect(modal).not.toBeUndefined()
modal.componentInstance.confirm()
let req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/delete/`
)
req.flush(true)
expect(req.request.body).toEqual({
documents: [3, 4],
method: 'delete',
parameters: {},
})
httpTestingController.match(
`${environment.apiBaseUrl}documents/?page=1&page_size=50&ordering=-created&truncate_content=true`
@@ -868,7 +868,7 @@ describe('BulkEditorComponent', () => {
fixture.detectChanges()
component.applyDelete()
req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/delete/`
)
})
@@ -944,13 +944,11 @@ describe('BulkEditorComponent', () => {
expect(modal).not.toBeUndefined()
modal.componentInstance.confirm()
let req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/reprocess/`
)
req.flush(true)
expect(req.request.body).toEqual({
documents: [3, 4],
method: 'reprocess',
parameters: {},
})
httpTestingController.match(
`${environment.apiBaseUrl}documents/?page=1&page_size=50&ordering=-created&truncate_content=true`
@@ -979,13 +977,13 @@ describe('BulkEditorComponent', () => {
modal.componentInstance.rotate()
modal.componentInstance.confirm()
let req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/rotate/`
)
req.flush(true)
expect(req.request.body).toEqual({
documents: [3, 4],
method: 'rotate',
parameters: { degrees: 90 },
degrees: 90,
source_mode: 'latest_version',
})
httpTestingController.match(
`${environment.apiBaseUrl}documents/?page=1&page_size=50&ordering=-created&truncate_content=true`
@@ -1021,13 +1019,12 @@ describe('BulkEditorComponent', () => {
modal.componentInstance.metadataDocumentID = 3
modal.componentInstance.confirm()
let req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/merge/`
)
req.flush(true)
expect(req.request.body).toEqual({
documents: [3, 4],
method: 'merge',
parameters: { metadata_document_id: 3 },
metadata_document_id: 3,
})
httpTestingController.match(
`${environment.apiBaseUrl}documents/?page=1&page_size=50&ordering=-created&truncate_content=true`
@@ -1040,13 +1037,13 @@ describe('BulkEditorComponent', () => {
modal.componentInstance.deleteOriginals = true
modal.componentInstance.confirm()
req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/merge/`
)
req.flush(true)
expect(req.request.body).toEqual({
documents: [3, 4],
method: 'merge',
parameters: { metadata_document_id: 3, delete_originals: true },
metadata_document_id: 3,
delete_originals: true,
})
httpTestingController.match(
`${environment.apiBaseUrl}documents/?page=1&page_size=50&ordering=-created&truncate_content=true`
@@ -1061,13 +1058,13 @@ describe('BulkEditorComponent', () => {
modal.componentInstance.archiveFallback = true
modal.componentInstance.confirm()
req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/merge/`
)
req.flush(true)
expect(req.request.body).toEqual({
documents: [3, 4],
method: 'merge',
parameters: { metadata_document_id: 3, archive_fallback: true },
metadata_document_id: 3,
archive_fallback: true,
})
httpTestingController.match(
`${environment.apiBaseUrl}documents/?page=1&page_size=50&ordering=-created&truncate_content=true`

View File

@@ -12,7 +12,7 @@ import {
} from '@ng-bootstrap/ng-bootstrap'
import { saveAs } from 'file-saver'
import { NgxBootstrapIconsModule } from 'ngx-bootstrap-icons'
import { first, map, Subject, switchMap, takeUntil } from 'rxjs'
import { first, map, Observable, Subject, switchMap, takeUntil } from 'rxjs'
import { ConfirmDialogComponent } from 'src/app/components/common/confirm-dialog/confirm-dialog.component'
import { CustomField } from 'src/app/data/custom-field'
import { MatchingModel } from 'src/app/data/matching-model'
@@ -29,7 +29,9 @@ import { CorrespondentService } from 'src/app/services/rest/correspondent.servic
import { CustomFieldsService } from 'src/app/services/rest/custom-fields.service'
import { DocumentTypeService } from 'src/app/services/rest/document-type.service'
import {
DocumentBulkEditMethod,
DocumentService,
MergeDocumentsRequest,
SelectionDataItem,
} from 'src/app/services/rest/document.service'
import { SavedViewService } from 'src/app/services/rest/saved-view.service'
@@ -255,9 +257,9 @@ export class BulkEditorComponent
this.unsubscribeNotifier.complete()
}
private executeBulkOperation(
private executeBulkEditMethod(
modal: NgbModalRef,
method: string,
method: DocumentBulkEditMethod,
args: any,
overrideDocumentIDs?: number[]
) {
@@ -272,32 +274,55 @@ export class BulkEditorComponent
)
.pipe(first())
.subscribe({
next: () => {
if (args['delete_originals']) {
this.list.selected.clear()
}
this.list.reload()
this.list.reduceSelectionToFilter()
this.list.selected.forEach((id) => {
this.openDocumentService.refreshDocument(id)
})
this.savedViewService.maybeRefreshDocumentCounts()
if (modal) {
modal.close()
}
},
error: (error) => {
if (modal) {
modal.componentInstance.buttonsEnabled = true
}
this.toastService.showError(
$localize`Error executing bulk operation`,
error
)
},
next: () => this.handleOperationSuccess(modal),
error: (error) => this.handleOperationError(modal, error),
})
}
private executeDocumentAction(
modal: NgbModalRef,
request: Observable<any>,
options: { deleteOriginals?: boolean } = {}
) {
if (modal) {
modal.componentInstance.buttonsEnabled = false
}
request.pipe(first()).subscribe({
next: () => {
this.handleOperationSuccess(modal, options.deleteOriginals ?? false)
},
error: (error) => this.handleOperationError(modal, error),
})
}
private handleOperationSuccess(
modal: NgbModalRef,
clearSelection: boolean = false
) {
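// Operations that delete their originals (e.g. merge) invalidate the
// current selection, so drop it before reloading the list.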
if (clearSelection) {
this.list.selected.clear()
}
this.list.reload()
this.list.reduceSelectionToFilter()
this.list.selected.forEach((id) => {
this.openDocumentService.refreshDocument(id)
})
this.savedViewService.maybeRefreshDocumentCounts()
if (modal) {
modal.close()
}
}
private handleOperationError(modal: NgbModalRef, error: any) {
if (modal) {
modal.componentInstance.buttonsEnabled = true
}
this.toastService.showError(
$localize`Error executing bulk operation`,
error
)
}
private applySelectionData(
items: SelectionDataItem[],
selectionModel: FilterableDropdownSelectionModel
@@ -446,13 +471,13 @@ export class BulkEditorComponent
modal.componentInstance.confirmClicked
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
this.executeBulkOperation(modal, 'modify_tags', {
this.executeBulkEditMethod(modal, 'modify_tags', {
add_tags: changedTags.itemsToAdd.map((t) => t.id),
remove_tags: changedTags.itemsToRemove.map((t) => t.id),
})
})
} else {
this.executeBulkOperation(null, 'modify_tags', {
this.executeBulkEditMethod(null, 'modify_tags', {
add_tags: changedTags.itemsToAdd.map((t) => t.id),
remove_tags: changedTags.itemsToRemove.map((t) => t.id),
})
@@ -486,12 +511,12 @@ export class BulkEditorComponent
modal.componentInstance.confirmClicked
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
this.executeBulkOperation(modal, 'set_correspondent', {
this.executeBulkEditMethod(modal, 'set_correspondent', {
correspondent: correspondent ? correspondent.id : null,
})
})
} else {
this.executeBulkOperation(null, 'set_correspondent', {
this.executeBulkEditMethod(null, 'set_correspondent', {
correspondent: correspondent ? correspondent.id : null,
})
}
@@ -524,12 +549,12 @@ export class BulkEditorComponent
modal.componentInstance.confirmClicked
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
this.executeBulkOperation(modal, 'set_document_type', {
this.executeBulkEditMethod(modal, 'set_document_type', {
document_type: documentType ? documentType.id : null,
})
})
} else {
this.executeBulkOperation(null, 'set_document_type', {
this.executeBulkEditMethod(null, 'set_document_type', {
document_type: documentType ? documentType.id : null,
})
}
@@ -562,12 +587,12 @@ export class BulkEditorComponent
modal.componentInstance.confirmClicked
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
this.executeBulkOperation(modal, 'set_storage_path', {
this.executeBulkEditMethod(modal, 'set_storage_path', {
storage_path: storagePath ? storagePath.id : null,
})
})
} else {
this.executeBulkOperation(null, 'set_storage_path', {
this.executeBulkEditMethod(null, 'set_storage_path', {
storage_path: storagePath ? storagePath.id : null,
})
}
@@ -624,7 +649,7 @@ export class BulkEditorComponent
modal.componentInstance.confirmClicked
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
this.executeBulkOperation(modal, 'modify_custom_fields', {
this.executeBulkEditMethod(modal, 'modify_custom_fields', {
add_custom_fields: changedCustomFields.itemsToAdd.map((f) => f.id),
remove_custom_fields: changedCustomFields.itemsToRemove.map(
(f) => f.id
@@ -632,7 +657,7 @@ export class BulkEditorComponent
})
})
} else {
this.executeBulkOperation(null, 'modify_custom_fields', {
this.executeBulkEditMethod(null, 'modify_custom_fields', {
add_custom_fields: changedCustomFields.itemsToAdd.map((f) => f.id),
remove_custom_fields: changedCustomFields.itemsToRemove.map(
(f) => f.id
@@ -762,10 +787,16 @@ export class BulkEditorComponent
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
modal.componentInstance.buttonsEnabled = false
this.executeBulkOperation(modal, 'delete', {})
this.executeDocumentAction(
modal,
this.documentService.deleteDocuments(Array.from(this.list.selected))
)
})
} else {
this.executeBulkOperation(null, 'delete', {})
this.executeDocumentAction(
null,
this.documentService.deleteDocuments(Array.from(this.list.selected))
)
}
}
@@ -804,7 +835,12 @@ export class BulkEditorComponent
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
modal.componentInstance.buttonsEnabled = false
this.executeBulkOperation(modal, 'reprocess', {})
this.executeDocumentAction(
modal,
this.documentService.reprocessDocuments(
Array.from(this.list.selected)
)
)
})
}
@@ -815,7 +851,7 @@ export class BulkEditorComponent
modal.componentInstance.confirmClicked.subscribe(
({ permissions, merge }) => {
modal.componentInstance.buttonsEnabled = false
this.executeBulkOperation(modal, 'set_permissions', {
this.executeBulkEditMethod(modal, 'set_permissions', {
...permissions,
merge,
})
@@ -838,9 +874,13 @@ export class BulkEditorComponent
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
rotateDialog.buttonsEnabled = false
this.executeBulkOperation(modal, 'rotate', {
degrees: rotateDialog.degrees,
})
this.executeDocumentAction(
modal,
this.documentService.rotateDocuments(
Array.from(this.list.selected),
rotateDialog.degrees
)
)
})
}
@@ -856,18 +896,22 @@ export class BulkEditorComponent
mergeDialog.confirmClicked
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
const args = {}
const args: MergeDocumentsRequest = {}
if (mergeDialog.metadataDocumentID > -1) {
args['metadata_document_id'] = mergeDialog.metadataDocumentID
args.metadata_document_id = mergeDialog.metadataDocumentID
}
if (mergeDialog.deleteOriginals) {
args['delete_originals'] = true
args.delete_originals = true
}
if (mergeDialog.archiveFallback) {
args['archive_fallback'] = true
args.archive_fallback = true
}
mergeDialog.buttonsEnabled = false
this.executeBulkOperation(modal, 'merge', args, mergeDialog.documentIDs)
this.executeDocumentAction(
modal,
this.documentService.mergeDocuments(mergeDialog.documentIDs, args),
{ deleteOriginals: !!args.delete_originals }
)
this.toastService.showInfo(
$localize`Merged document will be queued for consumption.`
)

View File

@@ -15,7 +15,7 @@
}
@if (document && displayFields?.includes(DisplayField.TAGS)) {
<div class="tags d-flex flex-column text-end position-absolute me-1 fs-6">
<div class="tags d-flex flex-column text-end position-absolute me-1 fs-6" [class.tags-no-wrap]="document.tags.length > 3">
@for (tagID of tagIDs; track tagID) {
<pngx-tag [tagID]="tagID" (click)="clickTag.emit(tagID);$event.stopPropagation()" [clickable]="true" linkTitle="Toggle tag filter" i18n-linkTitle></pngx-tag>
}

View File

@@ -72,4 +72,14 @@ a {
max-width: 80%;
row-gap: .2rem;
line-height: 1;
&.tags-no-wrap {
::ng-deep .badge {
display: inline-block;
max-width: 100%;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
}
}

View File

@@ -82,6 +82,16 @@ describe('DocumentCardSmallComponent', () => {
).toHaveLength(6)
})
it('should clear hidden tag counter when tag count falls below the limit', () => {
expect(component.moreTags).toEqual(3)
component.document.tags = [1, 2, 3, 4, 5, 6]
fixture.detectChanges()
expect(component.moreTags).toBeNull()
expect(fixture.nativeElement.textContent).not.toContain('+ 3')
})
it('should try to close the preview on mouse leave', () => {
component.popupPreview = {
close: jest.fn(),

View File

@@ -126,6 +126,7 @@ export class DocumentCardSmallComponent
this.moreTags = this.document.tags.length - (limit - 1)
return this.document.tags.slice(0, limit - 1)
} else {
this.moreTags = null
return this.document.tags
}
}

View File

@@ -56,13 +56,20 @@ $paperless-card-breakpoints: (
.sticky-top {
z-index: 990; // below main navbar
top: calc(7rem - 2px); // height of navbar (mobile)
top: calc(7rem - 2px); // height of navbar + search row (mobile)
transition: top 0.2s ease;
@media (min-width: 580px) {
top: 3.5rem; // height of navbar
}
}
@media (max-width: 579.98px) {
:host-context(main.mobile-search-hidden) .sticky-top {
top: calc(3.5rem - 2px); // height of navbar only when search is hidden
}
}
.table .form-check {
padding: 0.2rem;
min-height: 0;

View File

@@ -0,0 +1,122 @@
import {
HttpErrorResponse,
HttpHandlerFn,
HttpRequest,
} from '@angular/common/http'
import { throwError } from 'rxjs'
import * as navUtils from '../utils/navigation'
import { createAuthExpiryInterceptor } from './auth-expiry.interceptor'
describe('withAuthExpiryInterceptor', () => {
let interceptor: ReturnType<typeof createAuthExpiryInterceptor>
let dateNowSpy: jest.SpiedFunction<typeof Date.now>
beforeEach(() => {
interceptor = createAuthExpiryInterceptor()
dateNowSpy = jest.spyOn(Date, 'now').mockReturnValue(1000)
})
afterEach(() => {
jest.restoreAllMocks()
})
it('reloads when an API request returns 401', () => {
const reloadSpy = jest
.spyOn(navUtils, 'locationReload')
.mockImplementation(() => {})
interceptor(
new HttpRequest('GET', '/api/documents/'),
failingHandler('/api/documents/', 401)
).subscribe({
error: () => undefined,
})
expect(reloadSpy).toHaveBeenCalledTimes(1)
})
it('does not reload for non-401 errors', () => {
const reloadSpy = jest
.spyOn(navUtils, 'locationReload')
.mockImplementation(() => {})
interceptor(
new HttpRequest('GET', '/api/documents/'),
failingHandler('/api/documents/', 500)
).subscribe({
error: () => undefined,
})
expect(reloadSpy).not.toHaveBeenCalled()
})
it('does not reload for non-api 401 responses', () => {
const reloadSpy = jest
.spyOn(navUtils, 'locationReload')
.mockImplementation(() => {})
interceptor(
new HttpRequest('GET', '/accounts/profile/'),
failingHandler('/accounts/profile/', 401)
).subscribe({
error: () => undefined,
})
expect(reloadSpy).not.toHaveBeenCalled()
})
it('reloads only once even with multiple API 401 responses', () => {
const reloadSpy = jest
.spyOn(navUtils, 'locationReload')
.mockImplementation(() => {})
const request = new HttpRequest('GET', '/api/documents/')
const handler = failingHandler('/api/documents/', 401)
interceptor(request, handler).subscribe({
error: () => undefined,
})
interceptor(request, handler).subscribe({
error: () => undefined,
})
expect(reloadSpy).toHaveBeenCalledTimes(1)
})
it('retries reload after cooldown for repeated API 401 responses', () => {
const reloadSpy = jest
.spyOn(navUtils, 'locationReload')
.mockImplementation(() => {})
dateNowSpy
.mockReturnValueOnce(1000)
.mockReturnValueOnce(2500)
.mockReturnValueOnce(3501)
const request = new HttpRequest('GET', '/api/documents/')
const handler = failingHandler('/api/documents/', 401)
interceptor(request, handler).subscribe({
error: () => undefined,
})
interceptor(request, handler).subscribe({
error: () => undefined,
})
interceptor(request, handler).subscribe({
error: () => undefined,
})
expect(reloadSpy).toHaveBeenCalledTimes(2)
})
})
function failingHandler(url: string, status: number): HttpHandlerFn {
return (_request) =>
throwError(
() =>
new HttpErrorResponse({
status,
url,
})
)
}

View File

@@ -0,0 +1,37 @@
import {
HttpErrorResponse,
HttpEvent,
HttpHandlerFn,
HttpInterceptorFn,
HttpRequest,
} from '@angular/common/http'
import { catchError, Observable, throwError } from 'rxjs'
import { locationReload } from '../utils/navigation'
export const createAuthExpiryInterceptor = (): HttpInterceptorFn => {
let lastReloadAttempt = Number.NEGATIVE_INFINITY
return (
request: HttpRequest<unknown>,
next: HttpHandlerFn
): Observable<HttpEvent<unknown>> =>
next(request).pipe(
catchError((error: unknown) => {
if (
error instanceof HttpErrorResponse &&
error.status === 401 &&
request.url.includes('/api/')
) {
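// Several in-flight requests can all come back 401 at once; throttle so
// that only one reload fires per cooldown window.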
const now = Date.now()
if (now - lastReloadAttempt >= 2000) {
lastReloadAttempt = now
locationReload()
}
}
return throwError(() => error)
})
)
}
export const withAuthExpiryInterceptor = createAuthExpiryInterceptor()
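The `locationReload` helper imported by the interceptor and its spec is not part of this diff; presumably it is a thin indirection so that tests can spy on the reload. A plausible shape, stated as an assumption:

// src/app/utils/navigation.ts (assumed): wraps the hard reload behind a
// spyable function, since window.location.reload itself is awkward to mock.
export const locationReload = (): void => {
  window.location.reload()
}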

View File

@@ -21,6 +21,7 @@ import {
FILTER_HAS_TAGS_ANY,
} from '../data/filter-rule-type'
import { SavedView } from '../data/saved-view'
import { DOCUMENT_LIST_SERVICE } from '../data/storage-keys'
import { SETTINGS_KEYS } from '../data/ui-settings'
import { PermissionsGuard } from '../guards/permissions.guard'
import { DocumentListViewService } from './document-list-view.service'
@@ -248,6 +249,29 @@ describe('DocumentListViewService', () => {
expect(documentListViewService.sortReverse).toBeTruthy()
})
it('restores only known list view state fields from local storage', () => {
try {
localStorage.setItem(
DOCUMENT_LIST_SERVICE.CURRENT_VIEW_CONFIG,
'{"currentPage":3,"sortField":"title","sortReverse":false,"__proto__":{"polluted":true},"injected":"ignored"}'
)
const restoredService = TestBed.runInInjectionContext(
() => new DocumentListViewService()
)
expect(restoredService.currentPage).toEqual(3)
expect(restoredService.sortField).toEqual('title')
expect(restoredService.sortReverse).toBeFalsy()
expect(
(restoredService as any).activeListViewState.injected
).toBeUndefined()
expect(({} as any).polluted).toBeUndefined()
} finally {
localStorage.removeItem(DOCUMENT_LIST_SERVICE.CURRENT_VIEW_CONFIG)
}
})
it('should load from query params', () => {
expect(documentListViewService.currentPage).toEqual(1)
const page = 2

View File

@@ -24,6 +24,20 @@ const LIST_DEFAULT_DISPLAY_FIELDS: DisplayField[] = DEFAULT_DISPLAY_FIELDS.map(
(f) => f.id
).filter((f) => f !== DisplayField.ADDED)
const RESTORABLE_LIST_VIEW_STATE_KEYS: (keyof ListViewState)[] = [
'title',
'documents',
'currentPage',
'collectionSize',
'sortField',
'sortReverse',
'filterRules',
'selected',
'pageSize',
'displayMode',
'displayFields',
]
/**
* Captures the current state of the list view.
*/
@@ -112,6 +126,32 @@ export class DocumentListViewService {
private displayFieldsInitialized: boolean = false
private restoreListViewState(savedState: unknown): ListViewState {
const newState = this.defaultListViewState()
if (
!savedState ||
typeof savedState !== 'object' ||
Array.isArray(savedState)
) {
return newState
}
const parsedState = savedState as Partial<
Record<keyof ListViewState, unknown>
>
const mutableState = newState as Record<keyof ListViewState, unknown>
for (const key of RESTORABLE_LIST_VIEW_STATE_KEYS) {
const value = parsedState[key]
if (value != null) {
mutableState[key] = value
}
}
return newState
}
get activeSavedViewId() {
return this._activeSavedViewId
}
@@ -127,14 +167,7 @@ export class DocumentListViewService {
if (documentListViewConfigJson) {
try {
let savedState: ListViewState = JSON.parse(documentListViewConfigJson)
// Remove null elements from the restored state
Object.keys(savedState).forEach((k) => {
if (savedState[k] == null) {
delete savedState[k]
}
})
// only use restored state attributes instead of defaults if they are not null
let newState = Object.assign(this.defaultListViewState(), savedState)
let newState = this.restoreListViewState(savedState)
this.listViewStates.set(null, newState)
} catch (e) {
localStorage.removeItem(DOCUMENT_LIST_SERVICE.CURRENT_VIEW_CONFIG)
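Why the allowlist matters: `Object.assign` copies own enumerable keys via [[Set]], so a JSON payload carrying an own "__proto__" key silently rewires the target object's prototype, which is exactly what the removed `Object.assign(defaults, savedState)` path allowed. A minimal illustration, not part of the diff:

// JSON.parse yields a plain own data property literally named "__proto__".
const payload = JSON.parse('{"__proto__": {"polluted": true}, "currentPage": 3}')
const viaAssign = Object.assign({}, payload)
console.log((viaAssign as any).polluted) // true: the prototype was swapped in
const viaAllowlist: Record<string, unknown> = {}
for (const key of ['currentPage']) {
  if (payload[key] != null) viaAllowlist[key] = payload[key]
}
console.log((viaAllowlist as any).polluted) // undefined: "__proto__" never copied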

View File

@@ -230,6 +230,88 @@ describe(`DocumentService`, () => {
})
})
it('should call appropriate api endpoint for delete documents', () => {
const ids = [1, 2, 3]
subscription = service.deleteDocuments(ids).subscribe()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}${endpoint}/delete/`
)
expect(req.request.method).toEqual('POST')
expect(req.request.body).toEqual({
documents: ids,
})
})
it('should call appropriate api endpoint for reprocess documents', () => {
const ids = [1, 2, 3]
subscription = service.reprocessDocuments(ids).subscribe()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}${endpoint}/reprocess/`
)
expect(req.request.method).toEqual('POST')
expect(req.request.body).toEqual({
documents: ids,
})
})
it('should call appropriate api endpoint for rotate documents', () => {
const ids = [1, 2, 3]
subscription = service.rotateDocuments(ids, 90).subscribe()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}${endpoint}/rotate/`
)
expect(req.request.method).toEqual('POST')
expect(req.request.body).toEqual({
documents: ids,
degrees: 90,
source_mode: 'latest_version',
})
})
it('should call appropriate api endpoint for merge documents', () => {
const ids = [1, 2, 3]
const args = { metadata_document_id: 1, delete_originals: true }
subscription = service.mergeDocuments(ids, args).subscribe()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}${endpoint}/merge/`
)
expect(req.request.method).toEqual('POST')
expect(req.request.body).toEqual({
documents: ids,
metadata_document_id: 1,
delete_originals: true,
})
})
it('should call appropriate api endpoint for edit pdf', () => {
const ids = [1]
const args = { operations: [{ page: 1, rotate: 90, doc: 0 }] }
subscription = service.editPdfDocuments(ids, args).subscribe()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}${endpoint}/edit_pdf/`
)
expect(req.request.method).toEqual('POST')
expect(req.request.body).toEqual({
documents: ids,
operations: [{ page: 1, rotate: 90, doc: 0 }],
})
})
it('should call appropriate api endpoint for remove password', () => {
const ids = [1]
const args = { password: 'secret', update_document: true }
subscription = service.removePasswordDocuments(ids, args).subscribe()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}${endpoint}/remove_password/`
)
expect(req.request.method).toEqual('POST')
expect(req.request.body).toEqual({
documents: ids,
password: 'secret',
update_document: true,
})
})
it('should return the correct preview URL for a single document', () => {
let url = service.getPreviewUrl(documents[0].id)
expect(url).toEqual(

View File

@@ -42,6 +42,45 @@ export enum BulkEditSourceMode {
EXPLICIT_SELECTION = 'explicit_selection',
}
export type DocumentBulkEditMethod =
| 'set_correspondent'
| 'set_document_type'
| 'set_storage_path'
| 'add_tag'
| 'remove_tag'
| 'modify_tags'
| 'modify_custom_fields'
| 'set_permissions'
export interface MergeDocumentsRequest {
metadata_document_id?: number
delete_originals?: boolean
archive_fallback?: boolean
source_mode?: BulkEditSourceMode
}
export interface EditPdfOperation {
page: number
rotate?: number
doc?: number
}
export interface EditPdfDocumentsRequest {
operations: EditPdfOperation[]
delete_original?: boolean
update_document?: boolean
include_metadata?: boolean
source_mode?: BulkEditSourceMode
}
export interface RemovePasswordDocumentsRequest {
password: string
update_document?: boolean
delete_original?: boolean
include_metadata?: boolean
source_mode?: BulkEditSourceMode
}
@Injectable({
providedIn: 'root',
})
@@ -299,7 +338,7 @@ export class DocumentService extends AbstractPaperlessService<Document> {
return this.http.get<DocumentMetadata>(url.toString())
}
bulkEdit(ids: number[], method: string, args: any) {
bulkEdit(ids: number[], method: DocumentBulkEditMethod, args: any) {
return this.http.post(this.getResourceUrl(null, 'bulk_edit'), {
documents: ids,
method: method,
@@ -307,6 +346,54 @@ export class DocumentService extends AbstractPaperlessService<Document> {
})
}
deleteDocuments(ids: number[]) {
return this.http.post(this.getResourceUrl(null, 'delete'), {
documents: ids,
})
}
reprocessDocuments(ids: number[]) {
return this.http.post(this.getResourceUrl(null, 'reprocess'), {
documents: ids,
})
}
rotateDocuments(
ids: number[],
degrees: number,
sourceMode: BulkEditSourceMode = BulkEditSourceMode.LATEST_VERSION
) {
return this.http.post(this.getResourceUrl(null, 'rotate'), {
documents: ids,
degrees,
source_mode: sourceMode,
})
}
mergeDocuments(ids: number[], request: MergeDocumentsRequest = {}) {
return this.http.post(this.getResourceUrl(null, 'merge'), {
documents: ids,
...request,
})
}
editPdfDocuments(ids: number[], request: EditPdfDocumentsRequest) {
return this.http.post(this.getResourceUrl(null, 'edit_pdf'), {
documents: ids,
...request,
})
}
removePasswordDocuments(
ids: number[],
request: RemovePasswordDocumentsRequest
) {
return this.http.post(this.getResourceUrl(null, 'remove_password'), {
documents: ids,
...request,
})
}
getSelectionData(ids: number[]): Observable<SelectionData> {
return this.http.post<SelectionData>(
this.getResourceUrl(null, 'selection_data'),
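Callers now hit one endpoint per action instead of funneling everything through `bulk_edit` with a method string and a parameters bag; a typical call site looks like this (sketch, assuming an injected DocumentService):

// Rotate the current selection, then merge two documents, keeping the
// metadata of document 3 and deleting the originals afterwards.
this.documentService.rotateDocuments([3, 4], 90).subscribe()
this.documentService
  .mergeDocuments([3, 4], { metadata_document_id: 3, delete_originals: true })
  .subscribe()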

View File

@@ -166,6 +166,23 @@ describe('SettingsService', () => {
expect(settingsService.get(SETTINGS_KEYS.THEME_COLOR)).toEqual('#9fbf2f')
})
it('ignores unsafe top-level keys from loaded settings', () => {
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}ui_settings/`
)
const payload = JSON.parse(
JSON.stringify(ui_settings).replace(
'"settings":{',
'"settings":{"__proto__":{"polluted":"yes"},'
)
)
payload.settings.app_title = 'Safe Title'
req.flush(payload)
expect(settingsService.get(SETTINGS_KEYS.APP_TITLE)).toEqual('Safe Title')
expect(({} as any).polluted).toBeUndefined()
})
it('correctly allows updating settings of various types', () => {
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}ui_settings/`

View File

@@ -276,6 +276,8 @@ const ISO_LANGUAGE_OPTION: LanguageOption = {
dateInputFormat: 'yyyy-mm-dd',
}
const UNSAFE_OBJECT_KEYS = new Set(['__proto__', 'prototype', 'constructor'])
@Injectable({
providedIn: 'root',
})
@@ -291,7 +293,7 @@ export class SettingsService {
protected baseUrl: string = environment.apiBaseUrl + 'ui_settings/'
private settings: Object = {}
private settings: Record<string, any> = {}
currentUser: User
public settingsSaved: EventEmitter<any> = new EventEmitter()
@@ -320,6 +322,21 @@ export class SettingsService {
this._renderer = rendererFactory.createRenderer(null, null)
}
private isSafeObjectKey(key: string): boolean {
return !UNSAFE_OBJECT_KEYS.has(key)
}
private assignSafeSettings(source: Record<string, any>) {
if (!source || typeof source !== 'object' || Array.isArray(source)) {
return
}
for (const key of Object.keys(source)) {
if (!this.isSafeObjectKey(key)) continue
this.settings[key] = source[key]
}
}
// this is called by the app initializer in app.module
public initializeSettings(): Observable<UiSettings> {
return this.http.get<UiSettings>(this.baseUrl).pipe(
@@ -338,7 +355,7 @@ export class SettingsService {
})
}),
tap((uisettings) => {
Object.assign(this.settings, uisettings.settings)
this.assignSafeSettings(uisettings.settings)
if (this.get(SETTINGS_KEYS.APP_TITLE)?.length) {
environment.appTitle = this.get(SETTINGS_KEYS.APP_TITLE)
}
@@ -533,7 +550,11 @@ export class SettingsService {
let settingObj = this.settings
keys.forEach((keyPart, index) => {
keyPart = keyPart.replace(/-/g, '_')
if (!settingObj.hasOwnProperty(keyPart)) return
if (
!this.isSafeObjectKey(keyPart) ||
!Object.prototype.hasOwnProperty.call(settingObj, keyPart)
)
return
if (index == keys.length - 1) value = settingObj[keyPart]
else settingObj = settingObj[keyPart]
})
@@ -579,7 +600,9 @@ export class SettingsService {
const keys = key.replace('general-settings:', '').split(':')
keys.forEach((keyPart, index) => {
keyPart = keyPart.replace(/-/g, '_')
if (!settingObj.hasOwnProperty(keyPart)) settingObj[keyPart] = {}
if (!this.isSafeObjectKey(keyPart)) return
if (!Object.prototype.hasOwnProperty.call(settingObj, keyPart))
settingObj[keyPart] = {}
if (index == keys.length - 1) settingObj[keyPart] = value
else settingObj = settingObj[keyPart]
})
@@ -602,7 +625,10 @@ export class SettingsService {
maybeMigrateSettings() {
if (
!this.settings.hasOwnProperty('documentListSize') &&
!Object.prototype.hasOwnProperty.call(
this.settings,
'documentListSize'
) &&
localStorage.getItem(SETTINGS_KEYS.DOCUMENT_LIST_SIZE)
) {
// let's migrate
@@ -610,8 +636,7 @@ export class SettingsService {
const errorMessage = $localize`Unable to migrate settings to the database, please try saving manually.`
try {
for (const setting in SETTINGS_KEYS) {
const key = SETTINGS_KEYS[setting]
for (const key of Object.values(SETTINGS_KEYS)) {
const value = localStorage.getItem(key)
this.set(key, value)
}
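The `hasOwnProperty` checks also move to the `call()` form; one line of attacker-shaped JSON shows why (illustrative only):

const fromJson = JSON.parse('{"hasOwnProperty": 1, "documentListSize": 50}')
// fromJson.hasOwnProperty('documentListSize') would throw: the own key shadows the method.
Object.prototype.hasOwnProperty.call(fromJson, 'documentListSize') // true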

View File

@@ -62,7 +62,7 @@ export function hslToRgb(h, s, l) {
* @return Array The HSL representation
*/
export function rgbToHsl(r, g, b) {
;(r /= 255), (g /= 255), (b /= 255)
;((r /= 255), (g /= 255), (b /= 255))
var max = Math.max(r, g, b),
min = Math.min(r, g, b)
var h,

View File

@@ -6,7 +6,7 @@ export const environment = {
apiVersion: '10', // match src/paperless/settings.py
appTitle: 'Paperless-ngx',
tag: 'prod',
version: '2.20.10',
version: '2.20.13',
webSocketHost: window.location.host,
webSocketProtocol: window.location.protocol == 'https:' ? 'wss:' : 'ws:',
webSocketBaseUrl: base_url.pathname + 'ws/',

View File

@@ -5,7 +5,7 @@
export const environment = {
production: false,
apiBaseUrl: 'http://localhost:8000/api/',
apiVersion: '9',
apiVersion: '10',
appTitle: 'Paperless-ngx',
tag: 'dev',
version: 'DEVELOPMENT',

View File

@@ -154,6 +154,7 @@ import { DirtyDocGuard } from './app/guards/dirty-doc.guard'
import { DirtySavedViewGuard } from './app/guards/dirty-saved-view.guard'
import { PermissionsGuard } from './app/guards/permissions.guard'
import { withApiVersionInterceptor } from './app/interceptors/api-version.interceptor'
import { withAuthExpiryInterceptor } from './app/interceptors/auth-expiry.interceptor'
import { withCsrfInterceptor } from './app/interceptors/csrf.interceptor'
import { DocumentTitlePipe } from './app/pipes/document-title.pipe'
import { FilterPipe } from './app/pipes/filter.pipe'
@@ -399,7 +400,11 @@ bootstrapApplication(AppComponent, {
StoragePathNamePipe,
provideHttpClient(
withInterceptorsFromDi(),
withInterceptors([withCsrfInterceptor, withApiVersionInterceptor]),
withInterceptors([
withCsrfInterceptor,
withApiVersionInterceptor,
withAuthExpiryInterceptor,
]),
withFetch()
),
provideUiTour({

View File

@@ -150,6 +150,15 @@ $form-check-radio-checked-bg-image-dark: url("data:image/svg+xml,<svg xmlns='htt
background-color: var(--pngx-body-color-accent);
}
.list-group-item-action:not(.active):active {
--bs-list-group-action-active-color: var(--bs-body-color);
--bs-list-group-action-active-bg: var(--pngx-bg-darker);
}
.form-control:hover::file-selector-button {
background-color: var(--pngx-bg-dark) !important;
}
.search-container {
input, input:focus, i-bs[name="search"] , ::placeholder {
color: var(--pngx-primary-text-contrast) !important;

View File

@@ -576,8 +576,8 @@ def merge(
except Exception:
restore_archive_serial_numbers(backup)
raise
else:
consume_task.delay()
else:
consume_task.delay()
return "OK"

View File

@@ -3,25 +3,20 @@ from django.core.checks import Error
from django.core.checks import Warning
from django.core.checks import register
from documents.signals import document_consumer_declaration
from documents.templating.utils import convert_format_str_to_template_format
from paperless.parsers.registry import get_parser_registry
@register()
def parser_check(app_configs, **kwargs):
parsers = []
for response in document_consumer_declaration.send(None):
parsers.append(response[1])
if len(parsers) == 0:
if not get_parser_registry().all_parsers():
return [
Error(
"No parsers found. This is a bug. The consumer won't be "
"able to consume any documents without parsers.",
),
]
else:
return []
return []
@register()

View File

@@ -9,6 +9,7 @@ from pathlib import Path
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from collections.abc import Callable
from collections.abc import Iterator
from datetime import datetime
@@ -191,7 +192,12 @@ class DocumentClassifier:
target_file_temp.rename(target_file)
def train(self) -> bool:
def train(
self,
status_callback: Callable[[str], None] | None = None,
) -> bool:
notify = status_callback if status_callback is not None else lambda _: None
# Get non-inbox documents
docs_queryset = (
Document.objects.exclude(
@@ -213,6 +219,7 @@ class DocumentClassifier:
# Step 1: Extract and preprocess training data from the database.
logger.debug("Gathering data from database...")
notify(f"Gathering data from {docs_queryset.count()} document(s)...")
hasher = sha256()
for doc in docs_queryset:
y = -1
@@ -290,6 +297,7 @@ class DocumentClassifier:
# Step 2: vectorize data
logger.debug("Vectorizing data...")
notify("Vectorizing document content...")
def content_generator() -> Iterator[str]:
"""
@@ -316,6 +324,7 @@ class DocumentClassifier:
# Step 3: train the classifiers
if num_tags > 0:
logger.debug("Training tags classifier...")
notify(f"Training tags classifier ({num_tags} tag(s))...")
if num_tags == 1:
# Special case where only one tag has auto:
@@ -339,6 +348,9 @@ class DocumentClassifier:
if num_correspondents > 0:
logger.debug("Training correspondent classifier...")
notify(
f"Training correspondent classifier ({num_correspondents} correspondent(s))...",
)
self.correspondent_classifier = MLPClassifier(tol=0.01)
self.correspondent_classifier.fit(data_vectorized, labels_correspondent)
else:
@@ -349,6 +361,9 @@ class DocumentClassifier:
if num_document_types > 0:
logger.debug("Training document type classifier...")
notify(
f"Training document type classifier ({num_document_types} type(s))...",
)
self.document_type_classifier = MLPClassifier(tol=0.01)
self.document_type_classifier.fit(data_vectorized, labels_document_type)
else:
@@ -361,6 +376,7 @@ class DocumentClassifier:
logger.debug(
"Training storage paths classifier...",
)
notify(f"Training storage path classifier ({num_storage_paths} path(s))...")
self.storage_path_classifier = MLPClassifier(tol=0.01)
self.storage_path_classifier.fit(
data_vectorized,

View File

@@ -1,6 +1,6 @@
import datetime
import hashlib
import os
import shutil
import tempfile
from enum import StrEnum
from pathlib import Path
@@ -32,9 +32,7 @@ from documents.models import DocumentType
from documents.models import StoragePath
from documents.models import Tag
from documents.models import WorkflowTrigger
from documents.parsers import DocumentParser
from documents.parsers import ParseError
from documents.parsers import get_parser_class_for_mime_type
from documents.permissions import set_permissions_for_object
from documents.plugins.base import AlwaysRunPluginMixin
from documents.plugins.base import ConsumeTaskPlugin
@@ -48,10 +46,13 @@ from documents.signals import document_consumption_started
from documents.signals import document_updated
from documents.signals.handlers import run_workflows
from documents.templating.workflows import parse_w_workflow_placeholders
from documents.utils import compute_checksum
from documents.utils import copy_basic_file_stats
from documents.utils import copy_file_with_basic_stats
from documents.utils import run_subprocess
from paperless_mail.parsers import MailDocumentParser
from paperless.parsers import ParserContext
from paperless.parsers import ParserProtocol
from paperless.parsers.registry import get_parser_registry
LOGGING_NAME: Final[str] = "paperless.consumer"
@@ -196,9 +197,7 @@ class ConsumerPlugin(
version_doc = Document(
root_document=root_doc_frozen,
version_index=next_version_index + 1,
checksum=hashlib.md5(
file_for_checksum.read_bytes(),
).hexdigest(),
checksum=compute_checksum(file_for_checksum),
content=text or "",
page_count=page_count,
mime_type=mime_type,
@@ -338,18 +337,15 @@ class ConsumerPlugin(
Return the document object if it was successfully created.
"""
tempdir = None
# Preflight has already run including progress update to 0%
self.log.info(f"Consuming {self.filename}")
try:
# Preflight has already run including progress update to 0%
self.log.info(f"Consuming {self.filename}")
# For the actual work, copy the file into a tempdir
tempdir = tempfile.TemporaryDirectory(
prefix="paperless-ngx",
dir=settings.SCRATCH_DIR,
)
self.working_copy = Path(tempdir.name) / Path(self.filename)
# For the actual work, copy the file into a tempdir
with tempfile.TemporaryDirectory(
prefix="paperless-ngx",
dir=settings.SCRATCH_DIR,
) as tmpdir:
self.working_copy = Path(tmpdir) / Path(self.filename)
copy_file_with_basic_stats(self.input_doc.original_file, self.working_copy)
self.unmodified_original = None
@@ -381,7 +377,7 @@ class ConsumerPlugin(
self.log.debug(f"Detected mime type after qpdf: {mime_type}")
# Save the original file for later
self.unmodified_original = (
Path(tempdir.name) / Path("uo") / Path(self.filename)
Path(tmpdir) / Path("uo") / Path(self.filename)
)
self.unmodified_original.parent.mkdir(exist_ok=True)
copy_file_with_basic_stats(
@@ -392,11 +388,14 @@ class ConsumerPlugin(
self.log.error(f"Error attempting to clean PDF: {e}")
# Based on the mime type, get the parser for that type
parser_class: type[DocumentParser] | None = get_parser_class_for_mime_type(
mime_type,
parser_class: type[ParserProtocol] | None = (
get_parser_registry().get_parser_for_file(
mime_type,
self.filename,
self.working_copy,
)
)
if not parser_class:
tempdir.cleanup()
self._fail(
ConsumerStatusShortMessage.UNSUPPORTED_TYPE,
f"Unsupported mime type {mime_type}",
@@ -411,300 +410,275 @@ class ConsumerPlugin(
)
self.run_pre_consume_script()
except:
if tempdir:
tempdir.cleanup()
raise
def progress_callback(
current_progress,
max_progress,
) -> None: # pragma: no cover
# recalculate progress to be within 20 and 80
p = int((current_progress / max_progress) * 50 + 20)
self._send_progress(p, 100, ProgressStatusOptions.WORKING)
# This doesn't parse the document yet, but gives us a parser.
document_parser: DocumentParser = parser_class(
self.logging_group,
progress_callback=progress_callback,
)
self.log.debug(f"Parser: {type(document_parser).__name__}")
# Parse the document. This may take some time.
text = None
date = None
thumbnail = None
archive_path = None
page_count = None
try:
self._send_progress(
20,
100,
ProgressStatusOptions.WORKING,
ConsumerStatusShortMessage.PARSING_DOCUMENT,
)
self.log.debug(f"Parsing {self.filename}...")
if (
isinstance(document_parser, MailDocumentParser)
and self.input_doc.mailrule_id
):
document_parser.parse(
self.working_copy,
mime_type,
self.filename,
self.input_doc.mailrule_id,
# This doesn't parse the document yet, but gives us a parser.
with parser_class() as document_parser:
document_parser.configure(
ParserContext(mailrule_id=self.input_doc.mailrule_id),
)
else:
document_parser.parse(self.working_copy, mime_type, self.filename)
self.log.debug(f"Generating thumbnail for {self.filename}...")
self._send_progress(
70,
100,
ProgressStatusOptions.WORKING,
ConsumerStatusShortMessage.GENERATING_THUMBNAIL,
)
thumbnail = document_parser.get_thumbnail(
self.working_copy,
mime_type,
self.filename,
)
self.log.debug(
f"Parser: {document_parser.name} v{document_parser.version}",
)
# Parse the document. This may take some time.
text = None
date = None
thumbnail = None
archive_path = None
page_count = None
try:
self._send_progress(
20,
100,
ProgressStatusOptions.WORKING,
ConsumerStatusShortMessage.PARSING_DOCUMENT,
)
self.log.debug(f"Parsing {self.filename}...")
document_parser.parse(self.working_copy, mime_type)
self.log.debug(f"Generating thumbnail for {self.filename}...")
self._send_progress(
70,
100,
ProgressStatusOptions.WORKING,
ConsumerStatusShortMessage.GENERATING_THUMBNAIL,
)
thumbnail = document_parser.get_thumbnail(
self.working_copy,
mime_type,
)
text = document_parser.get_text()
date = document_parser.get_date()
if date is None:
self._send_progress(
90,
100,
ProgressStatusOptions.WORKING,
ConsumerStatusShortMessage.PARSE_DATE,
)
with get_date_parser() as date_parser:
date = next(date_parser.parse(self.filename, text), None)
archive_path = document_parser.get_archive_path()
page_count = document_parser.get_page_count(
self.working_copy,
mime_type,
)
except ParseError as e:
self._fail(
str(e),
f"Error occurred while consuming document {self.filename}: {e}",
exc_info=True,
exception=e,
)
except Exception as e:
self._fail(
str(e),
f"Unexpected error while consuming document {self.filename}: {e}",
exc_info=True,
exception=e,
)
# Prepare the document classifier.
# TODO: I don't really like to do this here, but this way we avoid
# reloading the classifier multiple times, since there are multiple
# post-consume hooks that all require the classifier.
classifier = load_classifier()
text = document_parser.get_text()
date = document_parser.get_date()
if date is None:
self._send_progress(
90,
95,
100,
ProgressStatusOptions.WORKING,
ConsumerStatusShortMessage.PARSE_DATE,
ConsumerStatusShortMessage.SAVE_DOCUMENT,
)
with get_date_parser() as date_parser:
date = next(date_parser.parse(self.filename, text), None)
archive_path = document_parser.get_archive_path()
page_count = document_parser.get_page_count(self.working_copy, mime_type)
except ParseError as e:
document_parser.cleanup()
if tempdir:
tempdir.cleanup()
self._fail(
str(e),
f"Error occurred while consuming document {self.filename}: {e}",
exc_info=True,
exception=e,
)
except Exception as e:
document_parser.cleanup()
if tempdir:
tempdir.cleanup()
self._fail(
str(e),
f"Unexpected error while consuming document {self.filename}: {e}",
exc_info=True,
exception=e,
)
# Prepare the document classifier.
# TODO: I don't really like to do this here, but this way we avoid
# reloading the classifier multiple times, since there are multiple
# post-consume hooks that all require the classifier.
classifier = load_classifier()
self._send_progress(
95,
100,
ProgressStatusOptions.WORKING,
ConsumerStatusShortMessage.SAVE_DOCUMENT,
)
# now that everything is done, we can start to store the document
# in the system. This will be a transaction and reasonably fast.
try:
with transaction.atomic():
# store the document.
if self.input_doc.root_document_id:
# If this is a new version of an existing document, we need
# to make sure we're not creating a new document, but updating
# the existing one.
root_doc = Document.objects.get(
pk=self.input_doc.root_document_id,
)
original_document = self._create_version_from_root(
root_doc,
text=text,
page_count=page_count,
mime_type=mime_type,
)
actor = None
# Save the new version, potentially creating an audit log entry for the version addition if enabled.
if (
settings.AUDIT_LOG_ENABLED
and self.metadata.actor_id is not None
):
actor = User.objects.filter(pk=self.metadata.actor_id).first()
if actor is not None:
from auditlog.context import ( # type: ignore[import-untyped]
set_actor,
# now that everything is done, we can start to store the document
# in the system. This will be a transaction and reasonably fast.
try:
with transaction.atomic():
# store the document.
if self.input_doc.root_document_id:
# If this is a new version of an existing document, we need
# to make sure we're not creating a new document, but updating
# the existing one.
root_doc = Document.objects.get(
pk=self.input_doc.root_document_id,
)
original_document = self._create_version_from_root(
root_doc,
text=text,
page_count=page_count,
mime_type=mime_type,
)
actor = None
with set_actor(actor):
# Save the new version, potentially creating an audit log entry for the version addition if enabled.
if (
settings.AUDIT_LOG_ENABLED
and self.metadata.actor_id is not None
):
actor = User.objects.filter(
pk=self.metadata.actor_id,
).first()
if actor is not None:
from auditlog.context import ( # type: ignore[import-untyped]
set_actor,
)
with set_actor(actor):
original_document.save()
else:
original_document.save()
else:
original_document.save()
# Create a log entry for the version addition, if enabled
if settings.AUDIT_LOG_ENABLED:
from auditlog.models import ( # type: ignore[import-untyped]
LogEntry,
)
LogEntry.objects.log_create(
instance=root_doc,
changes={
"Version Added": ["None", original_document.id],
},
action=LogEntry.Action.UPDATE,
actor=actor,
additional_data={
"reason": "Version added",
"version_id": original_document.id,
},
)
document = original_document
else:
original_document.save()
else:
original_document.save()
# Create a log entry for the version addition, if enabled
if settings.AUDIT_LOG_ENABLED:
from auditlog.models import ( # type: ignore[import-untyped]
LogEntry,
)
LogEntry.objects.log_create(
instance=root_doc,
changes={
"Version Added": ["None", original_document.id],
},
action=LogEntry.Action.UPDATE,
actor=actor,
additional_data={
"reason": "Version added",
"version_id": original_document.id,
},
)
document = original_document
else:
document = self._store(
text=text,
date=date,
page_count=page_count,
mime_type=mime_type,
)
# If we get here, it was successful. Proceed with post-consume
# hooks. If they fail, nothing will get changed.
document_consumption_finished.send(
sender=self.__class__,
document=document,
logging_group=self.logging_group,
classifier=classifier,
original_file=self.unmodified_original
if self.unmodified_original
else self.working_copy,
)
# After everything is in the database, copy the files into
# place. If this fails, we'll also rollback the transaction.
with FileLock(settings.MEDIA_LOCK):
generated_filename = generate_unique_filename(document)
if (
len(str(generated_filename))
> Document.MAX_STORED_FILENAME_LENGTH
):
self.log.warning(
"Generated source filename exceeds db path limit, falling back to default naming",
)
generated_filename = generate_filename(
document,
use_format=False,
)
document.filename = generated_filename
create_source_path_directory(document.source_path)
self._write(
self.unmodified_original
if self.unmodified_original is not None
else self.working_copy,
document.source_path,
)
self._write(
thumbnail,
document.thumbnail_path,
)
if archive_path and Path(archive_path).is_file():
generated_archive_filename = generate_unique_filename(
document,
archive_filename=True,
)
if (
len(str(generated_archive_filename))
> Document.MAX_STORED_FILENAME_LENGTH
):
self.log.warning(
"Generated archive filename exceeds db path limit, falling back to default naming",
document = self._store(
text=text,
date=date,
page_count=page_count,
mime_type=mime_type,
)
generated_archive_filename = generate_filename(
document,
archive_filename=True,
use_format=False,
)
document.archive_filename = generated_archive_filename
create_source_path_directory(document.archive_path)
self._write(
archive_path,
document.archive_path,
)
with Path(archive_path).open("rb") as f:
document.archive_checksum = hashlib.md5(
f.read(),
).hexdigest()
document.archive_checksum = compute_checksum(
document.archive_path,
)
# Don't save with the lock active. Saving will cause the file
# renaming logic to acquire the lock as well.
# This triggers things like file renaming
document.save()
if document.root_document_id:
document_updated.send(
sender=self.__class__,
document=document.root_document,
)
# Delete the file only if it was successfully consumed
self.log.debug(
f"Deleting original file {self.input_doc.original_file}",
)
self.input_doc.original_file.unlink()
self.log.debug(f"Deleting working copy {self.working_copy}")
self.working_copy.unlink()
if self.unmodified_original is not None: # pragma: no cover
self.log.debug(
f"Deleting unmodified original file {self.unmodified_original}",
)
self.unmodified_original.unlink()
# https://github.com/jonaswinkler/paperless-ng/discussions/1037
shadow_file = (
Path(self.input_doc.original_file).parent
/ f"._{Path(self.input_doc.original_file).name}"
)
if Path(shadow_file).is_file():
self.log.debug(f"Deleting shadow file {shadow_file}")
Path(shadow_file).unlink()
except Exception as e:
self._fail(
str(e),
f"The following error occurred while storing document "
f"{self.filename} after parsing: {e}",
exc_info=True,
exception=e,
)
finally:
document_parser.cleanup()
tempdir.cleanup()
self.run_post_consume_script(document)
self.log.info(f"Document {document} consumption finished")
@@ -800,7 +774,7 @@ class ConsumerPlugin(
title=title[:127],
content=text,
mime_type=mime_type,
checksum=hashlib.md5(file_for_checksum.read_bytes()).hexdigest(),
checksum=compute_checksum(file_for_checksum),
created=create_date,
modified=create_date,
page_count=page_count,
@@ -848,7 +822,7 @@ class ConsumerPlugin(
self.metadata.view_users is not None
or self.metadata.view_groups is not None
or self.metadata.change_users is not None
or self.metadata.change_users is not None
or self.metadata.change_groups is not None
):
permissions = {
"view": {
@@ -881,7 +855,7 @@ class ConsumerPlugin(
Path(source).open("rb") as read_file,
Path(target).open("wb") as write_file,
):
write_file.write(read_file.read())
shutil.copyfileobj(read_file, write_file)
# Attempt to copy file's original stats, but it's ok if we can't
try:
@@ -917,10 +891,9 @@ class ConsumerPreflightPlugin(
def pre_check_duplicate(self) -> None:
"""
Using the MD5 of the file, check this exact file doesn't already exist
Using the SHA256 of the file, check this exact file doesn't already exist
"""
with Path(self.input_doc.original_file).open("rb") as f:
checksum = hashlib.md5(f.read()).hexdigest()
checksum = compute_checksum(Path(self.input_doc.original_file))
existing_doc = Document.global_objects.filter(
Q(checksum=checksum) | Q(archive_checksum=checksum),
)
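compute_checksum itself is not shown in this diff, but every call site passes a single Path and stores a hex digest, so a minimal sketch of what documents/utils.py presumably provides (the helper's name is real, its body here is an assumption, mirroring the chunked _sha256 helper in the checksum migration further down):

import hashlib
from pathlib import Path


def compute_checksum(path: Path) -> str:
    # Assumed implementation: stream the file in 64 KiB chunks so large
    # documents are never read into memory at once (matches the _sha256
    # helper added in the checksum migration below).
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        while chunk := fh.read(65536):
            digest.update(chunk)
    return digest.hexdigest()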

View File

@@ -477,7 +477,14 @@ class DelayedFullTextQuery(DelayedQuery):
try:
corrected = self.searcher.correct_query(q, q_str)
if corrected.string != q_str:
suggested_correction = corrected.string
corrected_results = self.searcher.search(
corrected.query,
limit=1,
filter=MappedDocIdSet(self.filter_queryset, self.searcher.ixreader),
scored=False,
)
if len(corrected_results) > 0:
suggested_correction = corrected.string
except Exception as e:
logger.info(
"Error while correcting query %s: %s",

View File

@@ -1,13 +1,32 @@
from django.core.management.base import BaseCommand
from __future__ import annotations
import time
from documents.management.commands.base import PaperlessCommand
from documents.tasks import train_classifier
class Command(BaseCommand):
class Command(PaperlessCommand):
help = (
"Trains the classifier on your data and saves the resulting models to a "
"file. The document consumer will then automatically use this new model."
)
supports_progress_bar = False
supports_multiprocessing = False
def handle(self, *args, **options):
train_classifier(scheduled=False)
def handle(self, *args, **options) -> None:
start = time.monotonic()
with (
self.buffered_logging("paperless.tasks"),
self.buffered_logging("paperless.classifier"),
):
train_classifier(
scheduled=False,
status_callback=lambda msg: self.console.print(f" {msg}"),
)
elapsed = time.monotonic() - start
self.console.print(
f"[green]✓[/green] Classifier training complete ({elapsed:.1f}s)",
)
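The command now threads a status_callback into train_classifier (see the tasks.py hunk later in this diff, where the signature gains status_callback: Callable[[str], None] | None). A minimal sketch of how such a hook is typically consumed, with made-up step names, since the real DocumentClassifier.train internals are not part of this diff:

from collections.abc import Callable


def train(status_callback: Callable[[str], None] | None = None) -> bool:
    # Hypothetical training loop: interactive runs get progress lines,
    # scheduled runs pass None and stay quiet.
    for step in ("loading data", "vectorizing", "fitting model"):
        if status_callback is not None:
            status_callback(step)
    return True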

View File

@@ -56,6 +56,7 @@ from documents.models import WorkflowTrigger
from documents.settings import EXPORTER_ARCHIVE_NAME
from documents.settings import EXPORTER_FILE_NAME
from documents.settings import EXPORTER_THUMBNAIL_NAME
from documents.utils import compute_checksum
from documents.utils import copy_file_with_basic_stats
from paperless import version
from paperless.models import ApplicationConfiguration
@@ -693,7 +694,7 @@ class Command(CryptMixin, PaperlessCommand):
source_stat = source.stat()
target_stat = target.stat()
if self.compare_checksums and source_checksum:
target_checksum = hashlib.md5(target.read_bytes()).hexdigest()
target_checksum = compute_checksum(target)
perform_copy = target_checksum != source_checksum
elif (
source_stat.st_mtime != target_stat.st_mtime

View File

@@ -8,6 +8,7 @@ from pathlib import Path
from zipfile import ZipFile
from zipfile import is_zipfile
import ijson
from django.conf import settings
from django.contrib.auth.models import Permission
from django.contrib.auth.models import User
@@ -32,7 +33,6 @@ from documents.models import Document
from documents.models import DocumentType
from documents.models import Note
from documents.models import Tag
from documents.parsers import run_convert
from documents.settings import EXPORTER_ARCHIVE_NAME
from documents.settings import EXPORTER_CRYPTO_SETTINGS_NAME
from documents.settings import EXPORTER_FILE_NAME
@@ -46,6 +46,15 @@ if settings.AUDIT_LOG_ENABLED:
from auditlog.registry import auditlog
def iter_manifest_records(path: Path) -> Generator[dict, None, None]:
"""Yield records one at a time from a manifest JSON array via ijson."""
try:
with path.open("rb") as f:
yield from ijson.items(f, "item")
except ijson.JSONError as e:
raise CommandError(f"Failed to parse manifest file {path}: {e}") from e
@contextmanager
def disable_signal(sig, receiver, sender, *, weak: bool | None = None) -> Generator:
try:
@@ -143,14 +152,9 @@ class Command(CryptMixin, PaperlessCommand):
Loads manifest data from the various JSON files for parsing and loading the database
"""
main_manifest_path: Path = self.source / "manifest.json"
with main_manifest_path.open() as infile:
self.manifest = json.load(infile)
self.manifest_paths.append(main_manifest_path)
for file in Path(self.source).glob("**/*-manifest.json"):
with file.open() as infile:
self.manifest += json.load(infile)
self.manifest_paths.append(file)
def load_metadata(self) -> None:
@@ -201,7 +205,7 @@ class Command(CryptMixin, PaperlessCommand):
ContentType.objects.all().delete()
Permission.objects.all().delete()
for manifest_path in self.manifest_paths:
call_command("loaddata", manifest_path)
call_command("loaddata", manifest_path, skip_checks=True)
except (FieldDoesNotExist, DeserializationError, IntegrityError) as e:
self.stdout.write(self.style.ERROR("Database import failed"))
if (
@@ -231,7 +235,6 @@ class Command(CryptMixin, PaperlessCommand):
self.version: str | None = None
self.salt: str | None = None
self.manifest_paths = []
self.manifest = []
# Create a temporary directory for extracting a zip file into, even if the supplied source is not a zip file, to keep the code cleaner.
with tempfile.TemporaryDirectory() as tmp_dir:
@@ -291,6 +294,9 @@ class Command(CryptMixin, PaperlessCommand):
else:
self.stdout.write(self.style.NOTICE("Data only import completed"))
for tmp in getattr(self, "_decrypted_tmp_paths", []):
tmp.unlink(missing_ok=True)
self.stdout.write("Updating search index...")
call_command(
"document_index",
@@ -343,11 +349,12 @@ class Command(CryptMixin, PaperlessCommand):
) from e
self.stdout.write("Checking the manifest")
for record in self.manifest:
# Only check if the document files exist if this is not data only
# We don't care about documents for a data only import
if not self.data_only and record["model"] == "documents.document":
check_document_validity(record)
for manifest_path in self.manifest_paths:
for record in iter_manifest_records(manifest_path):
# Only check if the document files exist if this is not data only
# We don't care about documents for a data only import
if not self.data_only and record["model"] == "documents.document":
check_document_validity(record)
def _import_files_from_manifest(self) -> None:
settings.ORIGINALS_DIR.mkdir(parents=True, exist_ok=True)
@@ -356,23 +363,31 @@ class Command(CryptMixin, PaperlessCommand):
self.stdout.write("Copy files into paperless...")
manifest_documents = list(
filter(lambda r: r["model"] == "documents.document", self.manifest),
)
document_records = [
{
"pk": record["pk"],
EXPORTER_FILE_NAME: record[EXPORTER_FILE_NAME],
EXPORTER_THUMBNAIL_NAME: record.get(EXPORTER_THUMBNAIL_NAME),
EXPORTER_ARCHIVE_NAME: record.get(EXPORTER_ARCHIVE_NAME),
}
for manifest_path in self.manifest_paths
for record in iter_manifest_records(manifest_path)
if record["model"] == "documents.document"
]
for record in self.track(manifest_documents, description="Copying files..."):
for record in self.track(document_records, description="Copying files..."):
document = Document.objects.get(pk=record["pk"])
doc_file = record[EXPORTER_FILE_NAME]
document_path = self.source / doc_file
if EXPORTER_THUMBNAIL_NAME in record:
if record[EXPORTER_THUMBNAIL_NAME]:
thumb_file = record[EXPORTER_THUMBNAIL_NAME]
thumbnail_path = (self.source / thumb_file).resolve()
else:
thumbnail_path = None
if EXPORTER_ARCHIVE_NAME in record:
if record[EXPORTER_ARCHIVE_NAME]:
archive_file = record[EXPORTER_ARCHIVE_NAME]
archive_path = self.source / archive_file
else:
@@ -387,22 +402,10 @@ class Command(CryptMixin, PaperlessCommand):
copy_file_with_basic_stats(document_path, document.source_path)
if thumbnail_path:
if thumbnail_path.suffix in {".png", ".PNG"}:
run_convert(
density=300,
scale="500x5000>",
alpha="remove",
strip=True,
trim=False,
auto_orient=True,
input_file=f"{thumbnail_path}[0]",
output_file=str(document.thumbnail_path),
)
else:
copy_file_with_basic_stats(
thumbnail_path,
document.thumbnail_path,
)
copy_file_with_basic_stats(
thumbnail_path,
document.thumbnail_path,
)
if archive_path:
create_source_path_directory(document.archive_path)
@@ -413,33 +416,43 @@ class Command(CryptMixin, PaperlessCommand):
document.save()
def _decrypt_record_if_needed(self, record: dict) -> dict:
fields = self.CRYPT_FIELDS_BY_MODEL.get(record.get("model", ""))
if fields:
for field in fields:
if record["fields"].get(field):
record["fields"][field] = self.decrypt_string(
value=record["fields"][field],
)
return record
def decrypt_secret_fields(self) -> None:
"""
The converse decryption of some fields out of the export before importing to database
The converse decryption of some fields out of the export before importing to database.
Streams records from each manifest path and writes decrypted content to a temp file.
"""
if self.passphrase:
# Salt has been loaded from metadata.json at this point, so it cannot be None
self.setup_crypto(passphrase=self.passphrase, salt=self.salt)
had_at_least_one_record = False
for crypt_config in self.CRYPT_FIELDS:
importer_model: str = crypt_config["model_name"]
crypt_fields: str = crypt_config["fields"]
for record in filter(
lambda x: x["model"] == importer_model,
self.manifest,
):
had_at_least_one_record = True
for field in crypt_fields:
if record["fields"][field]:
record["fields"][field] = self.decrypt_string(
value=record["fields"][field],
)
if had_at_least_one_record:
# It's annoying, but the DB is loaded from the JSON directly
# Maybe could change that in the future?
(self.source / "manifest.json").write_text(
json.dumps(self.manifest, indent=2, ensure_ascii=False),
)
if not self.passphrase:
return
# Salt has been loaded from metadata.json at this point, so it cannot be None
self.setup_crypto(passphrase=self.passphrase, salt=self.salt)
self._decrypted_tmp_paths: list[Path] = []
new_paths: list[Path] = []
for manifest_path in self.manifest_paths:
tmp = manifest_path.with_name(manifest_path.stem + ".decrypted.json")
with tmp.open("w", encoding="utf-8") as out:
out.write("[\n")
first = True
for record in iter_manifest_records(manifest_path):
if not first:
out.write(",\n")
json.dump(
self._decrypt_record_if_needed(record),
out,
indent=2,
ensure_ascii=False,
)
first = False
out.write("\n]\n")
self._decrypted_tmp_paths.append(tmp)
new_paths.append(tmp)
self.manifest_paths = new_paths
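Both the manifest check and the decrypt step now go through iter_manifest_records, so a large manifest is parsed incrementally instead of via one json.load. A standalone illustration of the same ijson call (the file name is made up):

import ijson

# ijson.items(f, "item") yields each element of the top-level JSON
# array as soon as it has been parsed, keeping memory use flat.
with open("manifest.json", "rb") as f:
    for record in ijson.items(f, "item"):
        if record["model"] == "documents.document":
            print(record["pk"])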

View File

@@ -3,14 +3,18 @@ import shutil
from documents.management.commands.base import PaperlessCommand
from documents.models import Document
from documents.parsers import get_parser_class_for_mime_type
from paperless.parsers.registry import get_parser_registry
logger = logging.getLogger("paperless.management.thumbnails")
def _process_document(doc_id: int) -> None:
document: Document = Document.objects.get(id=doc_id)
parser_class = get_parser_class_for_mime_type(document.mime_type)
parser_class = get_parser_registry().get_parser_for_file(
document.mime_type,
document.original_filename or "",
document.source_path,
)
if parser_class is None:
logger.warning(
@@ -20,17 +24,9 @@ def _process_document(doc_id: int) -> None:
)
return
parser = parser_class(logging_group=None)
try:
thumb = parser.get_thumbnail(
document.source_path,
document.mime_type,
document.get_public_filename(),
)
with parser_class() as parser:
thumb = parser.get_thumbnail(document.source_path, document.mime_type)
shutil.move(thumb, document.thumbnail_path)
finally:
parser.cleanup()
class Command(PaperlessCommand):

View File

@@ -1,22 +0,0 @@
import sys
from django.core.management.commands.loaddata import Command as LoadDataCommand
# This class is used to migrate data between databases
# That's difficult to test
class Command(LoadDataCommand): # pragma: no cover
"""
Allow the loading of data from standard in. Sourced originally from:
https://gist.github.com/bmispelon/ad5a2c333443b3a1d051 (MIT licensed)
"""
def parse_name(self, fixture_name):
self.compression_formats["stdin"] = (lambda x, y: sys.stdin, None)
if fixture_name == "-":
return "-", "json", "stdin"
def find_fixtures(self, fixture_label):
if fixture_label == "-":
return [("-", None, "-")]
return super().find_fixtures(fixture_label)

View File

@@ -169,7 +169,7 @@ def match_storage_paths(document: Document, classifier: DocumentClassifier, user
def matches(matching_model: MatchingModel, document: Document):
search_flags = 0
document_content = document.content
document_content = document.get_effective_content() or ""
# Check that match is not empty
if not matching_model.match.strip():

View File

@@ -5,7 +5,7 @@ from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("documents", "0003_workflowaction_order"),
("documents", "0002_squashed"),
]
operations = [

View File

@@ -1,18 +0,0 @@
# Generated by Django 5.2.9 on 2026-01-20 20:06
from django.db import migrations
from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0002_squashed"),
]
operations = [
migrations.AddField(
model_name="workflowaction",
name="order",
field=models.PositiveIntegerField(default=0, verbose_name="order"),
),
]

View File

@@ -6,7 +6,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0004_remove_document_storage_type"),
("documents", "0003_remove_document_storage_type"),
]
operations = [

View File

@@ -6,7 +6,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0005_workflowtrigger_filter_has_any_correspondents_and_more"),
("documents", "0004_workflowtrigger_filter_has_any_correspondents_and_more"),
]
operations = [

View File

@@ -7,7 +7,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0006_alter_document_checksum_unique"),
("documents", "0005_alter_document_checksum_unique"),
]
operations = [

View File

@@ -46,7 +46,7 @@ def revoke_share_link_bundle_permissions(apps, schema_editor):
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
("documents", "0007_document_content_length"),
("documents", "0006_document_content_length"),
]
operations = [

View File

@@ -6,7 +6,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0008_sharelinkbundle"),
("documents", "0007_sharelinkbundle"),
]
operations = [

View File

@@ -7,7 +7,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0009_workflowaction_passwords_alter_workflowaction_type"),
("documents", "0008_workflowaction_passwords_alter_workflowaction_type"),
]
operations = [

View File

@@ -7,7 +7,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0010_alter_document_content_length"),
("documents", "0009_alter_document_content_length"),
]
operations = [

View File

@@ -6,7 +6,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0011_optimize_integer_field_sizes"),
("documents", "0010_optimize_integer_field_sizes"),
]
operations = [

View File

@@ -7,7 +7,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0012_alter_workflowaction_type"),
("documents", "0011_alter_workflowaction_type"),
]
operations = [

View File

@@ -6,7 +6,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0013_document_root_document"),
("documents", "0012_document_root_document"),
]
operations = [

View File

@@ -124,7 +124,7 @@ def _restore_visibility_fields(apps, schema_editor):
class Migration(migrations.Migration):
dependencies = [
("documents", "0014_alter_paperlesstask_task_name"),
("documents", "0013_alter_paperlesstask_task_name"),
]
operations = [

View File

@@ -7,7 +7,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0015_savedview_visibility_to_ui_settings"),
("documents", "0014_savedview_visibility_to_ui_settings"),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]

View File

@@ -0,0 +1,130 @@
import hashlib
import logging
from pathlib import Path
from django.conf import settings
from django.db import migrations
from django.db import models
logger = logging.getLogger("paperless.migrations")
_CHUNK_SIZE = 65536 # 64 KiB — avoids loading entire files into memory
_BATCH_SIZE = 500 # documents per bulk_update call
_PROGRESS_INTERVAL = 500 # log a progress line every N documents
def _sha256(path: Path) -> str:
h = hashlib.sha256()
with path.open("rb") as fh:
while chunk := fh.read(_CHUNK_SIZE):
h.update(chunk)
return h.hexdigest()
def recompute_checksums(apps, schema_editor):
"""Recompute all document checksums from MD5 to SHA256."""
Document = apps.get_model("documents", "Document")
total = Document.objects.count()
if total == 0:
return
logger.info("Recomputing SHA-256 checksums for %d document(s)...", total)
batch: list = []
processed = 0
for doc in Document.objects.only(
"pk",
"filename",
"checksum",
"archive_filename",
"archive_checksum",
).iterator(chunk_size=_BATCH_SIZE):
updated_fields: list[str] = []
# Reconstruct source path the same way Document.source_path does
fname = str(doc.filename) if doc.filename else f"{doc.pk:07}.pdf"
source_path = (settings.ORIGINALS_DIR / Path(fname)).resolve()
if source_path.exists():
doc.checksum = _sha256(source_path)
updated_fields.append("checksum")
else:
logger.warning(
"Document %s: original file %s not found, checksum not updated.",
doc.pk,
source_path,
)
# Mirror Document.has_archive_version: archive_filename is not None
if doc.archive_filename is not None:
archive_path = (
settings.ARCHIVE_DIR / Path(str(doc.archive_filename))
).resolve()
if archive_path.exists():
doc.archive_checksum = _sha256(archive_path)
updated_fields.append("archive_checksum")
else:
logger.warning(
"Document %s: archive file %s not found, checksum not updated.",
doc.pk,
archive_path,
)
if updated_fields:
batch.append(doc)
processed += 1
if len(batch) >= _BATCH_SIZE:
Document.objects.bulk_update(batch, ["checksum", "archive_checksum"])
batch.clear()
if processed % _PROGRESS_INTERVAL == 0:
logger.info(
"SHA-256 checksum progress: %d/%d (%d%%)",
processed,
total,
processed * 100 // total,
)
if batch:
Document.objects.bulk_update(batch, ["checksum", "archive_checksum"])
logger.info(
"SHA-256 checksum recomputation complete: %d document(s) processed.",
total,
)
class Migration(migrations.Migration):
dependencies = [
("documents", "0015_document_version_index_and_more"),
]
operations = [
migrations.AlterField(
model_name="document",
name="checksum",
field=models.CharField(
editable=False,
help_text="The checksum of the original document.",
max_length=64,
verbose_name="checksum",
),
),
migrations.AlterField(
model_name="document",
name="archive_checksum",
field=models.CharField(
blank=True,
editable=False,
help_text="The checksum of the archived document.",
max_length=64,
null=True,
verbose_name="archive checksum",
),
),
migrations.RunPython(recompute_checksums, migrations.RunPython.noop),
]

View File

@@ -216,14 +216,14 @@ class Document(SoftDeleteModel, ModelWithOwner): # type: ignore[django-manager-
checksum = models.CharField(
_("checksum"),
max_length=32,
max_length=64,
editable=False,
help_text=_("The checksum of the original document."),
)
archive_checksum = models.CharField(
_("archive checksum"),
max_length=32,
max_length=64,
editable=False,
blank=True,
null=True,
@@ -361,6 +361,42 @@ class Document(SoftDeleteModel, ModelWithOwner): # type: ignore[django-manager-
res += f" {self.title}"
return res
def get_effective_content(self) -> str | None:
"""
Returns the effective content for the document.
For root documents, this is the latest version's content when available.
For version documents, this is always the document's own content.
If the queryset already annotated ``effective_content``, that value is used.
"""
if hasattr(self, "effective_content"):
return getattr(self, "effective_content")
if self.root_document_id is not None or self.pk is None:
return self.content
prefetched_cache = getattr(self, "_prefetched_objects_cache", None)
prefetched_versions = (
prefetched_cache.get("versions")
if isinstance(prefetched_cache, dict)
else None
)
if prefetched_versions:
latest_prefetched = max(prefetched_versions, key=lambda doc: doc.id)
return latest_prefetched.content
latest_version_content = (
Document.objects.filter(root_document=self)
.order_by("-id")
.values_list("content", flat=True)
.first()
)
return (
latest_version_content
if latest_version_content is not None
else self.content
)
@property
def suggestion_content(self):
"""
@@ -373,15 +409,21 @@ class Document(SoftDeleteModel, ModelWithOwner): # type: ignore[django-manager-
This improves processing speed for large documents while keeping
enough context for accurate suggestions.
"""
if not self.content or len(self.content) <= 1200000:
return self.content
effective_content = self.get_effective_content()
if not effective_content or len(effective_content) <= 1200000:
return effective_content
else:
# Use 80% from the start and 20% from the end
# to preserve both opening and closing context.
head_len = 800000
tail_len = 200000
return " ".join((self.content[:head_len], self.content[-tail_len:]))
return " ".join(
(
effective_content[:head_len],
effective_content[-tail_len:],
),
)
@property
def source_path(self) -> Path:
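For scale: with the thresholds above, a 3,000,000-character document yields 800,000 characters of head plus 200,000 of tail joined by a single space, while anything at or under 1,200,000 characters passes through untouched. A standalone sketch of the same slicing:

LIMIT = 1_200_000
HEAD_LEN = 800_000
TAIL_LEN = 200_000


def truncate_for_suggestions(content: str) -> str:
    # Mirrors the 80/20 head/tail split above: keep opening and
    # closing context, drop the middle of very long documents.
    if len(content) <= LIMIT:
        return content
    return " ".join((content[:HEAD_LEN], content[-TAIL_LEN:]))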

View File

@@ -3,84 +3,47 @@ from __future__ import annotations
import logging
import mimetypes
import os
import re
import shutil
import subprocess
import tempfile
from functools import lru_cache
from pathlib import Path
from typing import TYPE_CHECKING
from django.conf import settings
from documents.loggers import LoggingMixin
from documents.signals import document_consumer_declaration
from documents.utils import copy_file_with_basic_stats
from documents.utils import run_subprocess
from paperless.parsers.registry import get_parser_registry
if TYPE_CHECKING:
import datetime
# This regular expression will try to find dates in the document at
# hand and will match the following formats:
# - XX.YY.ZZZZ with XX + YY being 1 or 2 and ZZZZ being 2 or 4 digits
# - XX/YY/ZZZZ with XX + YY being 1 or 2 and ZZZZ being 2 or 4 digits
# - XX-YY-ZZZZ with XX + YY being 1 or 2 and ZZZZ being 2 or 4 digits
# - ZZZZ.XX.YY with XX + YY being 1 or 2 and ZZZZ being 2 or 4 digits
# - ZZZZ/XX/YY with XX + YY being 1 or 2 and ZZZZ being 2 or 4 digits
# - ZZZZ-XX-YY with XX + YY being 1 or 2 and ZZZZ being 2 or 4 digits
# - XX. MONTH ZZZZ with XX being 1 or 2 and ZZZZ being 2 or 4 digits
# - MONTH ZZZZ, with ZZZZ being 4 digits
# - MONTH XX, ZZZZ with XX being 1 or 2 and ZZZZ being 4 digits
# - XX MON ZZZZ with XX being 1 or 2 and ZZZZ being 4 digits. MONTH is 3 letters
# - XXPP MONTH ZZZZ with XX being 1 or 2 and PP being 2 letters and ZZZZ being 4 digits
# TODO: isn't there a date parsing library for this?
DATE_REGEX = re.compile(
r"(\b|(?!=([_-])))(\d{1,2})[\.\/-](\d{1,2})[\.\/-](\d{4}|\d{2})(\b|(?=([_-])))|"
r"(\b|(?!=([_-])))(\d{4}|\d{2})[\.\/-](\d{1,2})[\.\/-](\d{1,2})(\b|(?=([_-])))|"
r"(\b|(?!=([_-])))(\d{1,2}[\. ]+[a-zéûäëčžúřěáíóńźçŞğü]{3,9} \d{4}|[a-zéûäëčžúřěáíóńźçŞğü]{3,9} \d{1,2}, \d{4})(\b|(?=([_-])))|"
r"(\b|(?!=([_-])))([^\W\d_]{3,9} \d{1,2}, (\d{4}))(\b|(?=([_-])))|"
r"(\b|(?!=([_-])))([^\W\d_]{3,9} \d{4})(\b|(?=([_-])))|"
r"(\b|(?!=([_-])))(\d{1,2}[^ 0-9]{2}[\. ]+[^ ]{3,9}[ \.\/-]\d{4})(\b|(?=([_-])))|"
r"(\b|(?!=([_-])))(\b\d{1,2}[ \.\/-][a-zéûäëčžúřěáíóńźçŞğü]{3}[ \.\/-]\d{4})(\b|(?=([_-])))",
re.IGNORECASE,
)
logger = logging.getLogger("paperless.parsing")
@lru_cache(maxsize=8)
def is_mime_type_supported(mime_type: str) -> bool:
"""
Returns True if the mime type is supported, False otherwise
"""
return get_parser_class_for_mime_type(mime_type) is not None
return get_parser_registry().get_parser_for_file(mime_type, "") is not None
@lru_cache(maxsize=8)
def get_default_file_extension(mime_type: str) -> str:
"""
Returns the default file extension for a mimetype, or
an empty string if it could not be determined
"""
for response in document_consumer_declaration.send(None):
parser_declaration = response[1]
supported_mime_types = parser_declaration["mime_types"]
if mime_type in supported_mime_types:
return supported_mime_types[mime_type]
parser_class = get_parser_registry().get_parser_for_file(mime_type, "")
if parser_class is not None:
supported = parser_class.supported_mime_types()
if mime_type in supported:
return supported[mime_type]
ext = mimetypes.guess_extension(mime_type)
if ext:
return ext
else:
return ""
return ext if ext else ""
@lru_cache(maxsize=8)
def is_file_ext_supported(ext: str) -> bool:
"""
Returns True if the file extension is supported, False otherwise
@@ -94,44 +57,17 @@ def is_file_ext_supported(ext: str) -> bool:
def get_supported_file_extensions() -> set[str]:
extensions = set()
for response in document_consumer_declaration.send(None):
parser_declaration = response[1]
supported_mime_types = parser_declaration["mime_types"]
for mime_type in supported_mime_types:
for parser_class in get_parser_registry().all_parsers():
for mime_type, ext in parser_class.supported_mime_types().items():
extensions.update(mimetypes.guess_all_extensions(mime_type))
# Python's stdlib might be behind, so also add what the parser
# says is the default extension
# This makes image/webp supported on Python < 3.11
extensions.add(supported_mime_types[mime_type])
extensions.add(ext)
return extensions
def get_parser_class_for_mime_type(mime_type: str) -> type[DocumentParser] | None:
"""
Returns the best parser (by weight) for the given mimetype or
None if no parser exists
"""
options = []
for response in document_consumer_declaration.send(None):
parser_declaration = response[1]
supported_mime_types = parser_declaration["mime_types"]
if mime_type in supported_mime_types:
options.append(parser_declaration)
if not options:
return None
best_parser = sorted(options, key=lambda _: _["weight"], reverse=True)[0]
# Return the parser with the highest weight.
return best_parser["parser"]
def run_convert(
input_file,
output_file,

View File

@@ -11,7 +11,6 @@ is an identity function that adds no overhead.
from __future__ import annotations
import hashlib
import logging
import uuid
from collections import defaultdict
@@ -30,6 +29,7 @@ from django.utils import timezone
from documents.models import Document
from documents.models import PaperlessTask
from documents.utils import compute_checksum
from paperless.config import GeneralConfig
logger = logging.getLogger("paperless.sanity_checker")
@@ -218,7 +218,7 @@ def _check_original(
present_files.discard(source_path)
try:
checksum = hashlib.md5(source_path.read_bytes()).hexdigest()
checksum = compute_checksum(source_path)
except OSError as e:
messages.error(doc.pk, f"Cannot read original file of document: {e}")
else:
@@ -255,7 +255,7 @@ def _check_archive(
present_files.discard(archive_path)
try:
checksum = hashlib.md5(archive_path.read_bytes()).hexdigest()
checksum = compute_checksum(archive_path)
except OSError as e:
messages.error(
doc.pk,

View File

@@ -703,15 +703,6 @@ class StoragePathField(serializers.PrimaryKeyRelatedField):
class CustomFieldSerializer(serializers.ModelSerializer):
def __init__(self, *args, **kwargs):
context = kwargs.get("context")
self.api_version = int(
context.get("request").version
if context and context.get("request")
else settings.REST_FRAMEWORK["DEFAULT_VERSION"],
)
super().__init__(*args, **kwargs)
data_type = serializers.ChoiceField(
choices=CustomField.FieldDataType,
read_only=False,
@@ -791,38 +782,6 @@ class CustomFieldSerializer(serializers.ModelSerializer):
)
return super().validate(attrs)
def to_internal_value(self, data):
ret = super().to_internal_value(data)
if (
self.api_version < 7
and ret.get("data_type", "") == CustomField.FieldDataType.SELECT
and isinstance(ret.get("extra_data", {}).get("select_options"), list)
):
ret["extra_data"]["select_options"] = [
{
"label": option,
"id": get_random_string(length=16),
}
for option in ret["extra_data"]["select_options"]
]
return ret
def to_representation(self, instance):
ret = super().to_representation(instance)
if (
self.api_version < 7
and instance.data_type == CustomField.FieldDataType.SELECT
):
# Convert the select options with ids to a list of strings
ret["extra_data"]["select_options"] = [
option["label"] for option in ret["extra_data"]["select_options"]
]
return ret
class ReadWriteSerializerMethodField(serializers.SerializerMethodField):
"""
@@ -838,6 +797,25 @@ class ReadWriteSerializerMethodField(serializers.SerializerMethodField):
return {self.field_name: data}
def validate_documentlink_targets(user, doc_ids):
if Document.objects.filter(id__in=doc_ids).count() != len(doc_ids):
raise serializers.ValidationError(
"Some documents in value don't exist or were specified twice.",
)
if user is None:
return
target_documents = Document.objects.filter(id__in=doc_ids).select_related("owner")
if not all(
has_perms_owner_aware(user, "change_document", document)
for document in target_documents
):
raise PermissionDenied(
_("Insufficient permissions."),
)
class CustomFieldInstanceSerializer(serializers.ModelSerializer):
field = serializers.PrimaryKeyRelatedField(queryset=CustomField.objects.all())
value = ReadWriteSerializerMethodField(allow_null=True)
@@ -928,59 +906,14 @@ class CustomFieldInstanceSerializer(serializers.ModelSerializer):
"Value must be a list",
)
doc_ids = data["value"]
if Document.objects.filter(id__in=doc_ids).count() != len(
data["value"],
):
raise serializers.ValidationError(
"Some documents in value don't exist or were specified twice.",
)
request = self.context.get("request")
validate_documentlink_targets(
getattr(request, "user", None) if request is not None else None,
doc_ids,
)
return data
def get_api_version(self):
return int(
self.context.get("request").version
if self.context.get("request")
else settings.REST_FRAMEWORK["DEFAULT_VERSION"],
)
def to_internal_value(self, data):
ret = super().to_internal_value(data)
if (
self.get_api_version() < 7
and ret.get("field").data_type == CustomField.FieldDataType.SELECT
and ret.get("value") is not None
):
# Convert the index of the option in the field.extra_data["select_options"]
# list to the options unique id
ret["value"] = ret.get("field").extra_data["select_options"][ret["value"]][
"id"
]
return ret
def to_representation(self, instance):
ret = super().to_representation(instance)
if (
self.get_api_version() < 7
and instance.field.data_type == CustomField.FieldDataType.SELECT
):
# return the index of the option in the field.extra_data["select_options"] list
ret["value"] = next(
(
idx
for idx, option in enumerate(
instance.field.extra_data["select_options"],
)
if option["id"] == instance.value
),
None,
)
return ret
class Meta:
model = CustomFieldInstance
fields = [
@@ -1004,20 +937,6 @@ class NotesSerializer(serializers.ModelSerializer):
fields = ["id", "note", "created", "user"]
ordering = ["-created"]
def to_representation(self, instance):
ret = super().to_representation(instance)
request = self.context.get("request")
api_version = int(
request.version if request else settings.REST_FRAMEWORK["DEFAULT_VERSION"],
)
if api_version < 8 and "user" in ret:
user_id = ret["user"]["id"]
ret["user"] = user_id
return ret
def _get_viewable_duplicates(
document: Document,
@@ -1172,22 +1091,6 @@ class DocumentSerializer(
doc["content"] = getattr(instance, "effective_content") or ""
if self.truncate_content and "content" in self.fields:
doc["content"] = doc.get("content")[0:550]
request = self.context.get("request")
api_version = int(
request.version if request else settings.REST_FRAMEWORK["DEFAULT_VERSION"],
)
if api_version < 9 and "created" in self.fields:
# provide created as a datetime for backwards compatibility
from django.utils import timezone
doc["created"] = timezone.make_aware(
datetime.combine(
instance.created,
datetime.min.time(),
),
).isoformat()
return doc
def to_internal_value(self, data):
@@ -1655,11 +1558,124 @@ class DocumentListSerializer(serializers.Serializer):
return documents
class SourceModeValidationMixin:
def validate_source_mode(self, source_mode: str) -> str:
if source_mode not in bulk_edit.SourceModeChoices.__dict__.values():
raise serializers.ValidationError("Invalid source_mode")
return source_mode
class RotateDocumentsSerializer(DocumentListSerializer, SourceModeValidationMixin):
degrees = serializers.IntegerField(required=True)
source_mode = serializers.CharField(
required=False,
default=bulk_edit.SourceModeChoices.LATEST_VERSION,
)
class MergeDocumentsSerializer(DocumentListSerializer, SourceModeValidationMixin):
metadata_document_id = serializers.IntegerField(
required=False,
allow_null=True,
)
delete_originals = serializers.BooleanField(required=False, default=False)
archive_fallback = serializers.BooleanField(required=False, default=False)
source_mode = serializers.CharField(
required=False,
default=bulk_edit.SourceModeChoices.LATEST_VERSION,
)
class EditPdfDocumentsSerializer(DocumentListSerializer, SourceModeValidationMixin):
operations = serializers.ListField(required=True)
delete_original = serializers.BooleanField(required=False, default=False)
update_document = serializers.BooleanField(required=False, default=False)
include_metadata = serializers.BooleanField(required=False, default=True)
source_mode = serializers.CharField(
required=False,
default=bulk_edit.SourceModeChoices.LATEST_VERSION,
)
def validate(self, attrs):
documents = attrs["documents"]
if len(documents) > 1:
raise serializers.ValidationError(
"Edit PDF method only supports one document",
)
operations = attrs["operations"]
if not isinstance(operations, list):
raise serializers.ValidationError("operations must be a list")
for op in operations:
if not isinstance(op, dict):
raise serializers.ValidationError("invalid operation entry")
if "page" not in op or not isinstance(op["page"], int):
raise serializers.ValidationError("page must be an integer")
if "rotate" in op and not isinstance(op["rotate"], int):
raise serializers.ValidationError("rotate must be an integer")
if "doc" in op and not isinstance(op["doc"], int):
raise serializers.ValidationError("doc must be an integer")
if attrs["update_document"]:
max_idx = max(op.get("doc", 0) for op in operations)
if max_idx > 0:
raise serializers.ValidationError(
"update_document only allowed with a single output document",
)
doc = Document.objects.get(id=documents[0])
if doc.page_count:
for op in operations:
if op["page"] < 1 or op["page"] > doc.page_count:
raise serializers.ValidationError(
f"Page {op['page']} is out of bounds for document with {doc.page_count} pages.",
)
return attrs
class RemovePasswordDocumentsSerializer(
DocumentListSerializer,
SourceModeValidationMixin,
):
password = serializers.CharField(required=True)
update_document = serializers.BooleanField(required=False, default=False)
delete_original = serializers.BooleanField(required=False, default=False)
include_metadata = serializers.BooleanField(required=False, default=True)
source_mode = serializers.CharField(
required=False,
default=bulk_edit.SourceModeChoices.LATEST_VERSION,
)
class DeleteDocumentsSerializer(DocumentListSerializer):
pass
class ReprocessDocumentsSerializer(DocumentListSerializer):
pass
class BulkEditSerializer(
SerializerWithPerms,
DocumentListSerializer,
SetPermissionsMixin,
SourceModeValidationMixin,
):
# TODO: remove this and related backwards compatibility code when API v9 is dropped
# split, delete_pages can be removed entirely
MOVED_DOCUMENT_ACTION_ENDPOINTS = {
"delete": "/api/documents/delete/",
"reprocess": "/api/documents/reprocess/",
"rotate": "/api/documents/rotate/",
"merge": "/api/documents/merge/",
"edit_pdf": "/api/documents/edit_pdf/",
"remove_password": "/api/documents/remove_password/",
"split": "/api/documents/edit_pdf/",
"delete_pages": "/api/documents/edit_pdf/",
}
LEGACY_DOCUMENT_ACTION_METHODS = tuple(MOVED_DOCUMENT_ACTION_ENDPOINTS.keys())
method = serializers.ChoiceField(
choices=[
"set_correspondent",
@@ -1669,15 +1685,8 @@ class BulkEditSerializer(
"remove_tag",
"modify_tags",
"modify_custom_fields",
"delete",
"reprocess",
"set_permissions",
"rotate",
"merge",
"split",
"delete_pages",
"edit_pdf",
"remove_password",
*LEGACY_DOCUMENT_ACTION_METHODS,
],
label="Method",
write_only=True,
@@ -1722,6 +1731,19 @@ class BulkEditSerializer(
f"Some custom fields in {name} don't exist or were specified twice.",
)
if isinstance(custom_fields, dict):
custom_field_map = CustomField.objects.in_bulk(ids)
for raw_field_id, value in custom_fields.items():
field = custom_field_map.get(int(raw_field_id))
if (
field is not None
and field.data_type == CustomField.FieldDataType.DOCUMENTLINK
and value is not None
):
if not isinstance(value, list):
raise serializers.ValidationError("Value must be a list")
validate_documentlink_targets(self.user, value)
def validate_method(self, method):
if method == "set_correspondent":
return bulk_edit.set_correspondent
@@ -1755,8 +1777,7 @@ class BulkEditSerializer(
return bulk_edit.edit_pdf
elif method == "remove_password":
return bulk_edit.remove_password
else: # pragma: no cover
# This will never happen as it is handled by the ChoiceField
else:
raise serializers.ValidationError("Unsupported method.")
def _validate_parameters_tags(self, parameters) -> None:
@@ -1866,9 +1887,7 @@ class BulkEditSerializer(
"source_mode",
bulk_edit.SourceModeChoices.LATEST_VERSION,
)
if source_mode not in bulk_edit.SourceModeChoices.__dict__.values():
raise serializers.ValidationError("Invalid source_mode")
parameters["source_mode"] = source_mode
parameters["source_mode"] = self.validate_source_mode(source_mode)
def _validate_parameters_split(self, parameters) -> None:
if "pages" not in parameters:

View File

@@ -2,5 +2,4 @@ from django.dispatch import Signal
document_consumption_started = Signal()
document_consumption_finished = Signal()
document_consumer_declaration = Signal()
document_updated = Signal()

View File

@@ -1,5 +1,6 @@
from __future__ import annotations
import hashlib
import logging
import shutil
from pathlib import Path
@@ -403,6 +404,14 @@ class CannotMoveFilesException(Exception):
pass
def _path_matches_checksum(path: Path, checksum: str | None) -> bool:
if checksum is None or not path.is_file():
return False
with path.open("rb") as f:
return hashlib.md5(f.read()).hexdigest() == checksum
def _filename_template_uses_custom_fields(doc: Document) -> bool:
template = None
if doc.storage_path is not None:
@@ -473,10 +482,12 @@ def update_filename_and_move_files(
old_filename = instance.filename
old_source_path = instance.source_path
move_original = False
original_already_moved = False
old_archive_filename = instance.archive_filename
old_archive_path = instance.archive_path
move_archive = False
archive_already_moved = False
candidate_filename = generate_filename(instance)
if len(str(candidate_filename)) > Document.MAX_STORED_FILENAME_LENGTH:
@@ -497,14 +508,23 @@ def update_filename_and_move_files(
candidate_source_path.exists()
and candidate_source_path != old_source_path
):
# Only fall back to unique search when there is an actual conflict
new_filename = generate_unique_filename(instance)
if not old_source_path.is_file() and _path_matches_checksum(
candidate_source_path,
instance.checksum,
):
new_filename = candidate_filename
original_already_moved = True
else:
# Only fall back to unique search when there is an actual conflict
new_filename = generate_unique_filename(instance)
else:
new_filename = candidate_filename
# Need to convert to string to be able to save it to the db
instance.filename = str(new_filename)
move_original = old_filename != instance.filename
move_original = (
old_filename != instance.filename and not original_already_moved
)
if instance.has_archive_version:
archive_candidate = generate_filename(instance, archive_filename=True)
@@ -525,24 +545,38 @@ def update_filename_and_move_files(
archive_candidate_path.exists()
and archive_candidate_path != old_archive_path
):
new_archive_filename = generate_unique_filename(
instance,
archive_filename=True,
)
if not old_archive_path.is_file() and _path_matches_checksum(
archive_candidate_path,
instance.archive_checksum,
):
new_archive_filename = archive_candidate
archive_already_moved = True
else:
new_archive_filename = generate_unique_filename(
instance,
archive_filename=True,
)
else:
new_archive_filename = archive_candidate
instance.archive_filename = str(new_archive_filename)
move_archive = old_archive_filename != instance.archive_filename
move_archive = (
old_archive_filename != instance.archive_filename
and not archive_already_moved
)
else:
move_archive = False
if not move_original and not move_archive:
# Just update modified. Also, don't save() here to prevent infinite recursion.
Document.objects.filter(pk=instance.pk).update(
modified=timezone.now(),
)
updates = {"modified": timezone.now()}
if old_filename != instance.filename:
updates["filename"] = instance.filename
if old_archive_filename != instance.archive_filename:
updates["archive_filename"] = instance.archive_filename
# Don't save() here to prevent infinite recursion.
Document.objects.filter(pk=instance.pk).update(**updates)
return
if move_original:
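The already-moved detection above guards against a race where another worker renames the file between filename generation and the move: if the old path is gone but a file with the document's checksum already sits at the candidate path, the rename is treated as complete instead of spawning a second unique name. A condensed sketch of the decision (matches stands in for _path_matches_checksum, and the unique-name fallback is a placeholder):

from pathlib import Path


def resolve_rename(old_path: Path, candidate: Path, checksum: str, matches) -> tuple[Path, bool]:
    # Second return value means "already moved, skip the file move".
    if candidate.exists() and candidate != old_path:
        if not old_path.is_file() and matches(candidate, checksum):
            return candidate, True  # another worker finished the move
        return candidate.with_stem(candidate.stem + "_01"), False  # placeholder for unique search
    return candidate, False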
@@ -932,8 +966,25 @@ def run_workflows(
if not use_overrides:
# limit title to 128 characters
document.title = document.title[:128]
# save first before setting tags
document.save()
# Save only the fields that workflow actions can set directly.
# Deliberately excludes filename and archive_filename — those are
# managed exclusively by update_filename_and_move_files via the
# post_save signal. Writing stale in-memory values here would revert
# a concurrent update_filename_and_move_files DB write, leaving the
# DB pointing at the old path while the file is already at the new
# one (see: https://github.com/paperless-ngx/paperless-ngx/issues/12386).
# modified has auto_now=True but is not auto-added when update_fields
# is specified, so it must be listed explicitly.
document.save(
update_fields=[
"title",
"correspondent",
"document_type",
"storage_path",
"owner",
"modified",
],
)
document.tags.set(doc_tag_ids)
WorkflowRun.objects.create(
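One Django subtlety behind the comment above: when save() is given update_fields, an auto_now field is only written if it is listed explicitly, which is why "modified" appears in the list. In short:

# Given a model field:  modified = models.DateTimeField(auto_now=True)
#
# doc.save()                                      # modified IS refreshed
# doc.save(update_fields=["title"])               # modified is NOT written
# doc.save(update_fields=["title", "modified"])   # modified IS refreshed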

View File

@@ -1,5 +1,4 @@
import datetime
import hashlib
import logging
import shutil
import uuid
@@ -52,19 +51,20 @@ from documents.models import StoragePath
from documents.models import Tag
from documents.models import WorkflowRun
from documents.models import WorkflowTrigger
from documents.parsers import DocumentParser
from documents.parsers import get_parser_class_for_mime_type
from documents.plugins.base import ConsumeTaskPlugin
from documents.plugins.base import ProgressManager
from documents.plugins.base import StopConsumeTaskError
from documents.plugins.helpers import ProgressManager
from documents.plugins.helpers import ProgressStatusOptions
from documents.sanity_checker import SanityCheckFailedException
from documents.signals import document_updated
from documents.signals.handlers import cleanup_document_deletion
from documents.signals.handlers import run_workflows
from documents.signals.handlers import send_websocket_document_updated
from documents.utils import compute_checksum
from documents.workflows.utils import get_workflows_for_trigger
from paperless.config import AIConfig
from paperless.parsers import ParserContext
from paperless.parsers.registry import get_parser_registry
from paperless_ai.indexing import llm_index_add_or_update_document
from paperless_ai.indexing import llm_index_remove_document
from paperless_ai.indexing import update_llm_index
@@ -100,7 +100,11 @@ def index_reindex(*, iter_wrapper: IterWrapper[Document] = _identity) -> None:
@shared_task
def train_classifier(*, scheduled=True) -> None:
def train_classifier(
*,
scheduled=True,
status_callback: Callable[[str], None] | None = None,
) -> None:
task = PaperlessTask.objects.create(
type=PaperlessTask.TaskType.SCHEDULED_TASK
if scheduled
@@ -136,7 +140,7 @@ def train_classifier(*, scheduled=True) -> None:
classifier = DocumentClassifier()
try:
if classifier.train():
if classifier.train(status_callback=status_callback):
logger.info(
f"Saving updated classifier model to {settings.MODEL_FILE}...",
)
@@ -300,7 +304,11 @@ def update_document_content_maybe_archive_file(document_id) -> None:
mime_type = document.mime_type
parser_class: type[DocumentParser] = get_parser_class_for_mime_type(mime_type)
parser_class = get_parser_registry().get_parser_for_file(
mime_type,
document.original_filename or "",
document.source_path,
)
if not parser_class:
logger.error(
@@ -309,97 +317,91 @@ def update_document_content_maybe_archive_file(document_id) -> None:
)
return
parser: DocumentParser = parser_class(logging_group=uuid.uuid4())
with parser_class() as parser:
parser.configure(ParserContext())
try:
parser.parse(document.source_path, mime_type, document.get_public_filename())
try:
parser.parse(document.source_path, mime_type)
thumbnail = parser.get_thumbnail(
document.source_path,
mime_type,
document.get_public_filename(),
)
thumbnail = parser.get_thumbnail(document.source_path, mime_type)
with transaction.atomic():
oldDocument = Document.objects.get(pk=document.pk)
if parser.get_archive_path():
with Path(parser.get_archive_path()).open("rb") as f:
checksum = hashlib.md5(f.read()).hexdigest()
# I'm going to save first so that in case the file move
# fails, the database is rolled back.
# We also don't use save() since that triggers the filehandling
# logic, and we don't want that yet (file not yet in place)
document.archive_filename = generate_unique_filename(
document,
archive_filename=True,
)
Document.objects.filter(pk=document.pk).update(
archive_checksum=checksum,
content=parser.get_text(),
archive_filename=document.archive_filename,
)
newDocument = Document.objects.get(pk=document.pk)
if settings.AUDIT_LOG_ENABLED:
LogEntry.objects.log_create(
instance=oldDocument,
changes={
"content": [oldDocument.content, newDocument.content],
"archive_checksum": [
oldDocument.archive_checksum,
newDocument.archive_checksum,
],
"archive_filename": [
oldDocument.archive_filename,
newDocument.archive_filename,
],
},
additional_data={
"reason": "Update document content",
},
action=LogEntry.Action.UPDATE,
)
else:
Document.objects.filter(pk=document.pk).update(
content=parser.get_text(),
)
if settings.AUDIT_LOG_ENABLED:
LogEntry.objects.log_create(
instance=oldDocument,
changes={
"content": [oldDocument.content, parser.get_text()],
},
additional_data={
"reason": "Update document content",
},
action=LogEntry.Action.UPDATE,
)
with FileLock(settings.MEDIA_LOCK):
with transaction.atomic():
oldDocument = Document.objects.get(pk=document.pk)
if parser.get_archive_path():
create_source_path_directory(document.archive_path)
shutil.move(parser.get_archive_path(), document.archive_path)
shutil.move(thumbnail, document.thumbnail_path)
checksum = compute_checksum(parser.get_archive_path())
# I'm going to save first so that in case the file move
# fails, the database is rolled back.
# We also don't use save() since that triggers the filehandling
# logic, and we don't want that yet (file not yet in place)
document.archive_filename = generate_unique_filename(
document,
archive_filename=True,
)
Document.objects.filter(pk=document.pk).update(
archive_checksum=checksum,
content=parser.get_text(),
archive_filename=document.archive_filename,
)
newDocument = Document.objects.get(pk=document.pk)
if settings.AUDIT_LOG_ENABLED:
LogEntry.objects.log_create(
instance=oldDocument,
changes={
"content": [oldDocument.content, newDocument.content],
"archive_checksum": [
oldDocument.archive_checksum,
newDocument.archive_checksum,
],
"archive_filename": [
oldDocument.archive_filename,
newDocument.archive_filename,
],
},
additional_data={
"reason": "Update document content",
},
action=LogEntry.Action.UPDATE,
)
else:
Document.objects.filter(pk=document.pk).update(
content=parser.get_text(),
)
document.refresh_from_db()
logger.info(
f"Updating index for document {document_id} ({document.archive_checksum})",
)
with index.open_index_writer() as writer:
index.update_document(writer, document)
if settings.AUDIT_LOG_ENABLED:
LogEntry.objects.log_create(
instance=oldDocument,
changes={
"content": [oldDocument.content, parser.get_text()],
},
additional_data={
"reason": "Update document content",
},
action=LogEntry.Action.UPDATE,
)
ai_config = AIConfig()
if ai_config.llm_index_enabled:
llm_index_add_or_update_document(document)
with FileLock(settings.MEDIA_LOCK):
if parser.get_archive_path():
create_source_path_directory(document.archive_path)
shutil.move(parser.get_archive_path(), document.archive_path)
shutil.move(thumbnail, document.thumbnail_path)
clear_document_caches(document.pk)
document.refresh_from_db()
logger.info(
f"Updating index for document {document_id} ({document.archive_checksum})",
)
with index.open_index_writer() as writer:
index.update_document(writer, document)
except Exception:
logger.exception(
f"Error while parsing document {document} (ID: {document_id})",
)
finally:
parser.cleanup()
ai_config = AIConfig()
if ai_config.llm_index_enabled:
llm_index_add_or_update_document(document)
clear_document_caches(document.pk)
except Exception:
logger.exception(
f"Error while parsing document {document} (ID: {document_id})",
)
@shared_task
@@ -530,13 +532,13 @@ def check_scheduled_workflows() -> None:
id__in=matched_ids,
)
if documents.count() > 0:
if documents.exists():
documents = prefilter_documents_by_workflowtrigger(
documents,
trigger,
)
if documents.count() > 0:
if documents.exists():
logger.debug(
f"Found {documents.count()} documents for trigger {trigger}",
)
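The count() to exists() change above is a standard Django optimization: exists() issues a LIMIT 1 query and stops at the first match, while count() > 0 counts every matching row first. Sketch:

# documents is any Django QuerySet:
if documents.exists():      # SELECT ... LIMIT 1 - stops at the first match
    ...
if documents.count() > 0:   # SELECT COUNT(*) - counts every matching row
    ...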

Some files were not shown because too many files have changed in this diff.