Compare commits


18 Commits

Author SHA1 Message Date
shamoon
df09980e44 Use the shared stuff with LLM endpoint 2026-03-12 10:42:20 -07:00
shamoon
39e5400f68 Refactor out the url validation stuff so it can be shared 2026-03-12 10:33:44 -07:00
dependabot[bot]
e2947ccff2 Chore(deps): Bump the pre-commit-dependencies group with 4 updates (#12323)
* Chore(deps): Bump the pre-commit-dependencies group with 4 updates

---
updated-dependencies:
- dependency-name: https://github.com/codespell-project/codespell
  dependency-version: 2.4.2
  dependency-type: direct:production
  dependency-group: pre-commit-dependencies
- dependency-name: prettier
  dependency-version: 3.8.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: pre-commit-dependencies
- dependency-name: prettier-plugin-organize-imports
  dependency-version: 4.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: pre-commit-dependencies
- dependency-name: https://github.com/lovesegfault/beautysh
  dependency-version: 6.4.3
  dependency-type: direct:production
  dependency-group: pre-commit-dependencies
...

Signed-off-by: dependabot[bot] <support@github.com>

* Drop this, it seems more trouble than it's worth

* Re-run prek with new prettier

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: shamoon <4887959+shamoon@users.noreply.github.com>
2026-03-12 16:29:57 +00:00
dependabot[bot]
61841a767b Chore(deps): Bump the actions group with 3 updates (#12322)
Bumps the actions group with 3 updates: [docker/setup-buildx-action](https://github.com/docker/setup-buildx-action), [docker/login-action](https://github.com/docker/login-action) and [actions/setup-node](https://github.com/actions/setup-node).


Updates `docker/setup-buildx-action` from 3.12.0 to 4.0.0
- [Release notes](https://github.com/docker/setup-buildx-action/releases)
- [Commits](https://github.com/docker/setup-buildx-action/compare/v3.12.0...v4.0.0)

Updates `docker/login-action` from 3.7.0 to 4.0.0
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/v3.7.0...v4.0.0)

Updates `actions/setup-node` from 6.2.0 to 6.3.0
- [Release notes](https://github.com/actions/setup-node/releases)
- [Commits](https://github.com/actions/setup-node/compare/v6.2.0...v6.3.0)

---
updated-dependencies:
- dependency-name: docker/setup-buildx-action
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: actions
- dependency-name: docker/login-action
  dependency-version: 4.0.0
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: actions
- dependency-name: actions/setup-node
  dependency-version: 6.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: actions
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Trenton H <797416+stumpylog@users.noreply.github.com>
2026-03-12 09:04:22 -07:00
GitHub Actions
15db023caa Auto translate strings 2026-03-12 15:44:21 +00:00
shamoon
45b363659e Chore: mark document detail email action as deprecated (#12308) 2026-03-12 15:42:14 +00:00
Trenton H
7494161c95 Add dependency groups for pre-commit dependencies 2026-03-12 08:04:21 -07:00
Trenton H
5331312699 Remove cooldown for pre-commit updates (it's not supported)
Removed the default cooldown period for pre-commit updates.
2026-03-12 07:59:27 -07:00
Trenton H
b5a002b8ed Chore: Enable dependabot for pre-commit (#12305) 2026-03-12 07:52:43 -07:00
shamoon
dd8573242d Update api version for frontend dev server 2026-03-12 01:24:38 -07:00
Trenton H
86fa74c115 Fix: Postgres selection, DBENGINE and migrations (#12299) 2026-03-11 11:54:24 -07:00
shamoon
b7b9e83f37 Fix (dev): include DatePipe in BulkEditor unit test 2026-03-11 00:01:06 -07:00
GitHub Actions
217b5df591 Auto translate strings 2026-03-10 23:47:25 +00:00
shamoon
3efc9a5733 Fix: use effective content for matching and suggestion content (#12293) 2026-03-10 23:45:56 +00:00
shamoon
e19f341974 Fix: Pin filelock to ~=3.20.3 (#12297) 2026-03-10 13:38:23 -07:00
GitHub Actions
2b4ea570ef Auto translate strings 2026-03-10 18:58:20 +00:00
shamoon
86573fc1a0 Chore: separate actions from bulk edit endpoint (#12286) 2026-03-10 18:55:36 +00:00
dependabot[bot]
3856ec19c0 docker(deps): bump astral-sh/uv (#12265)
Bumps [astral-sh/uv](https://github.com/astral-sh/uv) from 0.10.7-python3.12-trixie-slim to 0.10.8-python3.12-trixie-slim.
- [Release notes](https://github.com/astral-sh/uv/releases)
- [Changelog](https://github.com/astral-sh/uv/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/uv/compare/0.10.7...0.10.8)

---
updated-dependencies:
- dependency-name: astral-sh/uv
  dependency-version: 0.10.8-python3.12-trixie-slim
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-03-10 17:27:06 +00:00
106 changed files with 7986 additions and 9748 deletions


@@ -12,6 +12,8 @@ updates:
     open-pull-requests-limit: 10
     schedule:
       interval: "monthly"
+    cooldown:
+      default-days: 7
     labels:
       - "frontend"
       - "dependencies"
@@ -36,7 +38,9 @@ updates:
     directory: "/"
     # Check for updates once a week
     schedule:
-      interval: "weekly"
+      interval: "monthly"
+    cooldown:
+      default-days: 7
     labels:
       - "backend"
       - "dependencies"
@@ -97,6 +101,8 @@ updates:
     schedule:
       # Check for updates to GitHub Actions every month
       interval: "monthly"
+    cooldown:
+      default-days: 7
     labels:
       - "ci-cd"
       - "dependencies"
@@ -112,7 +118,9 @@ updates:
       - "/"
       - "/.devcontainer/"
     schedule:
-      interval: "weekly"
+      interval: "monthly"
+    cooldown:
+      default-days: 7
     open-pull-requests-limit: 5
     labels:
       - "dependencies"
@@ -123,7 +131,9 @@ updates:
   - package-ecosystem: "docker-compose"
     directory: "/docker/compose/"
     schedule:
-      interval: "weekly"
+      interval: "monthly"
+    cooldown:
+      default-days: 7
     open-pull-requests-limit: 5
     labels:
       - "dependencies"
@@ -147,3 +157,11 @@ updates:
         postgres:
           patterns:
             - "docker.io/library/postgres*"
+  - package-ecosystem: "pre-commit" # See documentation for possible values
+    directory: "/" # Location of package manifests
+    schedule:
+      interval: "monthly"
+    groups:
+      pre-commit-dependencies:
+        patterns:
+          - "*"


@@ -104,9 +104,9 @@ jobs:
echo "repository=${repo_name}"
echo "name=${repo_name}" >> $GITHUB_OUTPUT
- name: Set up Docker Buildx
-    uses: docker/setup-buildx-action@v3.12.0
+    uses: docker/setup-buildx-action@v4.0.0
- name: Login to GitHub Container Registry
-    uses: docker/login-action@v3.7.0
+    uses: docker/login-action@v4.0.0
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
@@ -179,22 +179,22 @@ jobs:
echo "Downloaded digests:"
ls -la /tmp/digests/
- name: Set up Docker Buildx
-    uses: docker/setup-buildx-action@v3.12.0
+    uses: docker/setup-buildx-action@v4.0.0
- name: Login to GitHub Container Registry
-    uses: docker/login-action@v3.7.0
+    uses: docker/login-action@v4.0.0
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Docker Hub
if: needs.build-arch.outputs.push-external == 'true'
-    uses: docker/login-action@v3.7.0
+    uses: docker/login-action@v4.0.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to Quay.io
if: needs.build-arch.outputs.push-external == 'true'
-    uses: docker/login-action@v3.7.0
+    uses: docker/login-action@v4.0.0
with:
registry: quay.io
username: ${{ secrets.QUAY_USERNAME }}


@@ -67,7 +67,7 @@ jobs:
with:
version: 10
- name: Use Node.js 24
-    uses: actions/setup-node@v6.2.0
+    uses: actions/setup-node@v6.3.0
with:
node-version: 24.x
cache: 'pnpm'
@@ -95,7 +95,7 @@ jobs:
with:
version: 10
- name: Use Node.js 24
-    uses: actions/setup-node@v6.2.0
+    uses: actions/setup-node@v6.3.0
with:
node-version: 24.x
cache: 'pnpm'
@@ -130,7 +130,7 @@ jobs:
with:
version: 10
- name: Use Node.js 24
-    uses: actions/setup-node@v6.2.0
+    uses: actions/setup-node@v6.3.0
with:
node-version: 24.x
cache: 'pnpm'
@@ -181,7 +181,7 @@ jobs:
with:
version: 10
- name: Use Node.js 24
-    uses: actions/setup-node@v6.2.0
+    uses: actions/setup-node@v6.3.0
with:
node-version: 24.x
cache: 'pnpm'
@@ -214,7 +214,7 @@ jobs:
with:
version: 10
- name: Use Node.js 24
-    uses: actions/setup-node@v6.2.0
+    uses: actions/setup-node@v6.3.0
with:
node-version: 24.x
cache: 'pnpm'


@@ -35,7 +35,7 @@ jobs:
with:
version: 10
- name: Use Node.js 24
-    uses: actions/setup-node@v6.2.0
+    uses: actions/setup-node@v6.3.0
with:
node-version: 24.x
cache: 'pnpm'


@@ -40,7 +40,7 @@ jobs:
with:
version: 10
- name: Use Node.js 24
-    uses: actions/setup-node@v6.2.0
+    uses: actions/setup-node@v6.3.0
with:
node-version: 24.x
cache: 'pnpm'


@@ -29,7 +29,7 @@ repos:
- id: check-case-conflict
- id: detect-private-key
- repo: https://github.com/codespell-project/codespell
-    rev: v2.4.1
+    rev: v2.4.2
hooks:
- id: codespell
additional_dependencies: [tomli]
@@ -46,8 +46,8 @@ repos:
- ts
- markdown
additional_dependencies:
-      - prettier@3.3.3
-      - 'prettier-plugin-organize-imports@4.1.0'
+      - prettier@3.8.1
+      - 'prettier-plugin-organize-imports@4.3.0'
# Python hooks
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.15.5
@@ -65,7 +65,7 @@ repos:
- id: hadolint
# Shell script hooks
- repo: https://github.com/lovesegfault/beautysh
-    rev: v6.4.2
+    rev: v6.4.3
hooks:
- id: beautysh
types: [file]


@@ -5,14 +5,6 @@ const config = {
singleQuote: true,
// https://prettier.io/docs/en/options.html#trailing-commas
trailingComma: 'es5',
-  overrides: [
-    {
-      files: ['docs/*.md'],
-      options: {
-        tabWidth: 4,
-      },
-    },
-  ],
plugins: [require('prettier-plugin-organize-imports')],
}


@@ -30,7 +30,7 @@ RUN set -eux \
# Purpose: Installs s6-overlay and rootfs
# Comments:
# - Don't leave anything extra in here either
-FROM ghcr.io/astral-sh/uv:0.10.7-python3.12-trixie-slim AS s6-overlay-base
+FROM ghcr.io/astral-sh/uv:0.10.8-python3.12-trixie-slim AS s6-overlay-base
WORKDIR /usr/src/s6


@@ -56,6 +56,7 @@ services:
environment:
PAPERLESS_REDIS: redis://broker:6379
PAPERLESS_DBHOST: db
+    PAPERLESS_DBENGINE: postgres
env_file:
- stack.env
volumes:


@@ -62,6 +62,7 @@ services:
environment:
PAPERLESS_REDIS: redis://broker:6379
PAPERLESS_DBHOST: db
+    PAPERLESS_DBENGINE: postgresql
PAPERLESS_TIKA_ENABLED: 1
PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg:3000
PAPERLESS_TIKA_ENDPOINT: http://tika:9998


@@ -56,6 +56,7 @@ services:
environment:
PAPERLESS_REDIS: redis://broker:6379
PAPERLESS_DBHOST: db
+    PAPERLESS_DBENGINE: postgresql
volumes:
data:
media:


@@ -51,6 +51,7 @@ services:
env_file: docker-compose.env
environment:
PAPERLESS_REDIS: redis://broker:6379
+    PAPERLESS_DBENGINE: sqlite
PAPERLESS_TIKA_ENABLED: 1
PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg:3000
PAPERLESS_TIKA_ENDPOINT: http://tika:9998


@@ -42,6 +42,7 @@ services:
env_file: docker-compose.env
environment:
PAPERLESS_REDIS: redis://broker:6379
+    PAPERLESS_DBENGINE: sqlite
volumes:
data:
media:


@@ -10,8 +10,12 @@ cd "${PAPERLESS_SRC_DIR}"
# The whole migrate, with flock, needs to run as the right user
if [[ -n "${USER_IS_NON_ROOT}" ]]; then
+    exec s6-setlock -n "${data_dir}/migration_lock" python3 manage.py check --tag compatibility paperless
    exec s6-setlock -n "${data_dir}/migration_lock" python3 manage.py migrate --skip-checks --no-input
else
+    exec s6-setuidgid paperless \
+        s6-setlock -n "${data_dir}/migration_lock" \
+        python3 manage.py check --tag compatibility paperless
    exec s6-setuidgid paperless \
        s6-setlock -n "${data_dir}/migration_lock" \
        python3 manage.py migrate --skip-checks --no-input


@@ -10,16 +10,16 @@ consuming documents at that time.
Options available to any installation of paperless:
- Use the [document exporter](#exporter). The document exporter exports all your documents,
thumbnails, metadata, and database contents to a specific folder. You may import your
documents and settings into a fresh instance of paperless again or store your
documents in another DMS with this export.
The document exporter is also able to update an already existing
export. Therefore, incremental backups with `rsync` are entirely
possible.
The exporter does not include API tokens and they will need to be re-generated after importing.
!!! caution
@@ -29,28 +29,27 @@ Options available to any installation of paperless:
Options available to docker installations:
- Backup the docker volumes. These usually reside within
`/var/lib/docker/volumes` on the host and you need to be root in
order to access them.
Paperless uses 4 volumes:
- `paperless_media`: This is where your documents are stored.
- `paperless_data`: This is where auxiliary data is stored. This
folder also contains the SQLite database, if you use it.
- `paperless_pgdata`: Exists only if you use PostgreSQL and
contains the database.
- `paperless_dbdata`: Exists only if you use MariaDB and contains
the database.
Options available to bare-metal and non-docker installations:
- Backup the entire paperless folder. This ensures that if your
paperless instance crashes at some point or your disk fails, you can
simply copy the folder back into place and it works.
When using PostgreSQL or MariaDB, you'll also have to backup the
database.
### Restoring {#migrating-restoring}
@@ -509,19 +508,19 @@ collection for issues.
The issues detected by the sanity checker are as follows:
- Missing original files.
- Missing archive files.
- Inaccessible original files due to improper permissions.
- Inaccessible archive files due to improper permissions.
- Corrupted original documents by comparing their checksum against
what is stored in the database.
- Corrupted archive documents by comparing their checksum against what
is stored in the database.
- Missing thumbnails.
- Inaccessible thumbnails due to improper permissions.
- Documents without any content (warning).
- Orphaned files in the media directory (warning). These are files
that are not referenced by any document in paperless.
```
document_sanity_checker


@@ -25,20 +25,20 @@ documents.
The following algorithms are available:
- **None:** No matching will be performed.
- **Any:** Looks for any occurrence of any word provided in match in
the PDF. If you define the match as `Bank1 Bank2`, it will match
documents containing either of these terms.
- **All:** Requires that every word provided appears in the PDF,
albeit not in the order provided.
- **Exact:** Matches only if the match appears exactly as provided
(i.e. preserve ordering) in the PDF.
- **Regular expression:** Parses the match as a regular expression and
tries to find a match within the document.
- **Fuzzy match:** Uses a partial matching based on locating the tag text
inside the document, using a [partial ratio](https://rapidfuzz.github.io/RapidFuzz/Usage/fuzz.html#partial-ratio)
- **Auto:** Tries to automatically match new documents. This does not
require you to set a match. See the [notes below](#automatic-matching).
When using the _any_ or _all_ matching algorithms, you can search for
terms that consist of multiple words by enclosing them in double quotes.
@@ -69,33 +69,33 @@ Paperless tries to hide much of the involved complexity with this
approach. However, there are a couple caveats you need to keep in mind
when using this feature:
- Changes to your documents are not immediately reflected by the
matching algorithm. The neural network needs to be _trained_ on your
documents after changes. Paperless periodically (default: once each
hour) checks for changes and does this automatically for you.
- The Auto matching algorithm only takes documents into account which
are NOT placed in your inbox (i.e. have no inbox tags assigned to
them). This ensures that the neural network only learns from
documents which you have correctly tagged before.
- The matching algorithm can only work if there is a correlation
between the tag, correspondent, document type, or storage path and
the document itself. Your bank statements usually contain your bank
account number and the name of the bank, so this works reasonably
well. However, tags such as "TODO" cannot be automatically
assigned.
- The matching algorithm needs a reasonable number of documents to
identify when to assign tags, correspondents, storage paths, and
types. If one out of a thousand documents has the correspondent
"Very obscure web shop I bought something five years ago", it will
probably not assign this correspondent automatically if you buy
something from them again. The more documents, the better.
- Paperless also needs a reasonable amount of negative examples to
decide when not to assign a certain tag, correspondent, document
type, or storage path. This will usually be the case as you start
filling up paperless with documents. Example: If all your documents
are either from "Webshop" or "Bank", paperless will assign one
of these correspondents to ANY new document, if both are set to
automatic matching.
## Hooking into the consumption process {#consume-hooks}
@@ -243,12 +243,12 @@ webserver:
Troubleshooting:
- Monitor the Docker Compose log
`cd ~/paperless-ngx; docker compose logs -f`
- Check your script's permission e.g. in case of permission error
`sudo chmod 755 post-consumption-example.sh`
- Pipe your script's output to a log file e.g.
`echo "${DOCUMENT_ID}" | tee --append /usr/src/paperless/scripts/post-consumption-example.log`
## File name handling {#file-name-handling}
@@ -307,35 +307,35 @@ will create a directory structure as follows:
Paperless provides the following variables for use within filenames:
- `{{ asn }}`: The archive serial number of the document, or "none".
- `{{ correspondent }}`: The name of the correspondent, or "none".
- `{{ document_type }}`: The name of the document type, or "none".
- `{{ tag_list }}`: A comma separated list of all tags assigned to the
document.
- `{{ title }}`: The title of the document.
- `{{ created }}`: The full date (ISO 8601 format, e.g. `2024-03-14`) the document was created.
- `{{ created_year }}`: Year created only, formatted as the year with
century.
- `{{ created_year_short }}`: Year created only, formatted as the year
without century, zero padded.
- `{{ created_month }}`: Month created only (number 01-12).
- `{{ created_month_name }}`: Month created name, as per locale
- `{{ created_month_name_short }}`: Month created abbreviated name, as per
locale
- `{{ created_day }}`: Day created only (number 01-31).
- `{{ added }}`: The full date (ISO format) the document was added to
paperless.
- `{{ added_year }}`: Year added only.
- `{{ added_year_short }}`: Year added only, formatted as the year without
century, zero padded.
- `{{ added_month }}`: Month added only (number 01-12).
- `{{ added_month_name }}`: Month added name, as per locale
- `{{ added_month_name_short }}`: Month added abbreviated name, as per
locale
- `{{ added_day }}`: Day added only (number 01-31).
- `{{ owner_username }}`: Username of document owner, if any, or "none"
- `{{ original_name }}`: Document original filename, minus the extension, if any, or "none"
- `{{ doc_pk }}`: The paperless identifier (primary key) for the document.
!!! warning
@@ -388,10 +388,10 @@ before empty placeholders are removed as well, empty directories are omitted.
When a single storage layout is not sufficient for your use case, storage paths allow for more complex
structure to set precisely where each document is stored in the file system.
- Each storage path is a [`PAPERLESS_FILENAME_FORMAT`](configuration.md#PAPERLESS_FILENAME_FORMAT) and
follows the rules described above
- Each document is assigned a storage path using the matching algorithms described above, but can be
overwritten at any time
For example, you could define the following two storage paths:
@@ -457,13 +457,13 @@ The `get_cf_value` filter retrieves a value from custom field data with optional
###### Parameters
- `custom_fields`: This _must_ be the provided custom field data
- `name` (str): Name of the custom field to retrieve
- `default` (str, optional): Default value to return if field is not found or has no value
###### Returns
- `str | None`: The field value, default value, or `None` if neither exists
###### Examples
@@ -487,12 +487,12 @@ The `datetime` filter formats a datetime string or datetime object using Python'
###### Parameters
- `value` (str | datetime): Date/time value to format (strings will be parsed automatically)
- `format` (str): Python strftime format string
###### Returns
- `str`: Formatted datetime string
###### Examples
@@ -525,13 +525,13 @@ An ISO string can also be provided to control the output format.
###### Parameters
- `value` (date | datetime | str): Date, datetime object or ISO string to format (datetime should be timezone-aware)
- `format` (str): Format type - either a Babel preset ('short', 'medium', 'long', 'full') or custom pattern
- `locale` (str): Locale code for localization (e.g., 'en_US', 'fr_FR', 'de_DE')
###### Returns
- `str`: Localized, formatted date string
###### Examples
@@ -565,15 +565,15 @@ See the [supported format codes](https://unicode.org/reports/tr35/tr35-dates.htm
### Format Presets
- **short**: Abbreviated format (e.g., "1/15/24")
- **medium**: Medium-length format (e.g., "Jan 15, 2024")
- **long**: Long format with full month name (e.g., "January 15, 2024")
- **full**: Full format including day of week (e.g., "Monday, January 15, 2024")
#### Additional Variables
- `{{ tag_name_list }}`: A list of tag names applied to the document, ordered by the tag name. Note this is a list, not a single string
- `{{ custom_fields }}`: A mapping of custom field names to their type and value. A user can access the mapping by field name or check if a field is applied by checking its existence in the variable.
!!! tip
@@ -675,15 +675,15 @@ installation, you can use volumes to accomplish this:
```yaml
services:
  # ...
  webserver:
    environment:
      - PAPERLESS_ENABLE_FLOWER
    ports:
      - 5555:5555 # (2)!
    # ...
    volumes:
      - /path/to/my/flowerconfig.py:/usr/src/paperless/src/paperless/flowerconfig.py:ro # (1)!
```
1. Note the `:ro` tag means the file will be mounted as read only.
@@ -714,11 +714,11 @@ For example, using Docker Compose:
```yaml
services:
  # ...
  webserver:
    # ...
    volumes:
      - /path/to/my/scripts:/custom-cont-init.d:ro # (1)!
```
1. Note the `:ro` tag means the folder will be mounted as read only. This is for extra security against changes
@@ -771,16 +771,16 @@ Paperless is able to utilize barcodes for automatically performing some tasks.
At this time, the library utilized for detection of barcodes supports the following types:
- EAN-13/UPC-A
- UPC-E
- EAN-8
- Code 128
- Code 93
- Code 39
- Codabar
- Interleaved 2 of 5
- QR Code
- SQ Code
For usage in Paperless, the type of barcode does not matter, only the contents of it.
@@ -793,8 +793,8 @@ below.
If document splitting is enabled, Paperless splits _after_ a separator barcode by default.
This means:
- any page containing the configured separator barcode starts a new document, starting with the **next** page
- pages containing the separator barcode are discarded
This is intended for dedicated separator sheets such as PATCH-T pages.
@@ -831,10 +831,10 @@ to `true`.
When enabled, documents will be split at pages containing tag barcodes, similar to how
ASN barcodes work. Key features:
- The page with the tag barcode is **retained** in the resulting document
- **Each split document extracts its own tags** - only tags on pages within that document are assigned
- Multiple tag barcodes can trigger multiple splits in the same document
- Works seamlessly with ASN barcodes - each split document gets its own ASN and tags
This is useful for batch scanning where you place tag barcode pages between different
documents to both separate and categorize them in a single operation.
@@ -996,9 +996,9 @@ If using docker, you'll need to add the following volume mounts to your `docker-
```yaml
webserver:
  volumes:
    - /home/user/.gnupg/pubring.gpg:/usr/src/paperless/.gnupg/pubring.gpg
    - <path to gpg-agent socket>:/usr/src/paperless/.gnupg/S.gpg-agent
```
For a 'bare-metal' installation no further configuration is necessary. If you
@@ -1006,9 +1006,9 @@ want to use a separate `GNUPG_HOME`, you can do so by configuring the [PAPERLESS
### Troubleshooting
- Make sure that `gpg-agent` is running on your host machine
- Make sure that encryption and decryption works from inside the container using the `gpg` commands from above.
- Check that all files in `/usr/src/paperless/.gnupg` have correct permissions
```shell
paperless@9da1865df327:~/.gnupg$ ls -al


@@ -66,10 +66,10 @@ Full text searching is available on the `/api/documents/` endpoint. Two
specific query parameters cause the API to return full text search
results:
- `/api/documents/?query=your%20search%20query`: Search for a document
using a full text query. For details on the syntax, see [Basic Usage - Searching](usage.md#basic-usage_searching).
- `/api/documents/?more_like_id=1234`: Search for documents similar to
the document with id 1234.
Pagination works exactly the same as it does for normal requests on this
endpoint.
@@ -106,12 +106,12 @@ attribute with various information about the search results:
}
```
- `score` is an indication of how well this document matches the query
relative to the other search results.
- `highlights` is an excerpt from the document content and highlights
the search terms with `<span>` tags as shown above.
- `rank` is the index of the search results. The first result will
have rank 0.
### Filtering by custom fields
@@ -122,33 +122,33 @@ use cases:
1. Documents with a custom field "due" (date) between Aug 1, 2024 and
Sept 1, 2024 (inclusive):
`?custom_field_query=["due", "range", ["2024-08-01", "2024-09-01"]]`
2. Documents with a custom field "customer" (text) that equals "bob"
(case sensitive):
`?custom_field_query=["customer", "exact", "bob"]`
3. Documents with a custom field "answered" (boolean) set to `true`:
`?custom_field_query=["answered", "exact", true]`
4. Documents with a custom field "favorite animal" (select) set to either
"cat" or "dog":
`?custom_field_query=["favorite animal", "in", ["cat", "dog"]]`
5. Documents with a custom field "address" (text) that is empty:
`?custom_field_query=["OR", [["address", "isnull", true], ["address", "exact", ""]]]`
6. Documents that don't have a field called "foo":
`?custom_field_query=["foo", "exists", false]`
7. Documents that have document links "references" to both document 3 and 7:
`?custom_field_query=["references", "contains", [3, 7]]`
All field types support basic operations including `exact`, `in`, `isnull`,
and `exists`. String, URL, and monetary fields support case-insensitive
@@ -164,8 +164,8 @@ Get auto completions for a partial search term.
Query parameters:
- `term`: The incomplete term.
- `limit`: Amount of results. Defaults to 10.
Results returned by the endpoint are ordered by importance of the term
in the document index. The first result is the term that has the highest
@@ -189,19 +189,19 @@ from there.
The endpoint supports the following optional form fields:
- `title`: Specify a title that the consumer should use for the
document.
- `created`: Specify a DateTime where the document was created (e.g.
"2016-04-19" or "2016-04-19 06:15:00+02:00").
- `correspondent`: Specify the ID of a correspondent that the consumer
should use for the document.
- `document_type`: Similar to correspondent.
- `storage_path`: Similar to correspondent.
- `tags`: Similar to correspondent. Specify this multiple times to
have multiple tags added to the document.
- `archive_serial_number`: An optional archive serial number to set.
- `custom_fields`: Either an array of custom field ids to assign (with an empty
value) to the document or an object mapping field id -> value.
The endpoint will immediately return HTTP 200 if the document consumption
process was started successfully, with the UUID of the consumption task
@@ -215,16 +215,16 @@ consumption including the ID of a created document if consumption succeeded.
Document versions are file-level versions linked to one root document.
- Root document metadata (title, tags, correspondent, document type, storage path, custom fields, permissions) remains shared.
- Version-specific file data (file, mime type, checksums, archive info, extracted text content) belongs to the selected/latest version.
Version-aware endpoints:
- `GET /api/documents/{id}/`: returns root document data; `content` resolves to latest version content by default. Use `?version={version_id}` to resolve content for a specific version.
- `PATCH /api/documents/{id}/`: content updates target the selected version (`?version={version_id}`) or latest version by default; non-content metadata updates target the root document.
- `GET /api/documents/{id}/download/`, `GET /api/documents/{id}/preview/`, `GET /api/documents/{id}/thumb/`, `GET /api/documents/{id}/metadata/`: accept `?version={version_id}`.
- `POST /api/documents/{id}/update_version/`: uploads a new version using multipart form field `document` and optional `version_label`.
- `DELETE /api/documents/{root_id}/versions/{version_id}/`: deletes a non-root version.
## Permissions
@@ -282,74 +282,38 @@ a json payload of the format:
The following methods are supported:
-  - `set_correspondent`
-    - Requires `parameters`: `{ "correspondent": CORRESPONDENT_ID }`
-  - `set_document_type`
-    - Requires `parameters`: `{ "document_type": DOCUMENT_TYPE_ID }`
-  - `set_storage_path`
-    - Requires `parameters`: `{ "storage_path": STORAGE_PATH_ID }`
-  - `add_tag`
-    - Requires `parameters`: `{ "tag": TAG_ID }`
-  - `remove_tag`
-    - Requires `parameters`: `{ "tag": TAG_ID }`
-  - `modify_tags`
-    - Requires `parameters`: `{ "add_tags": [LIST_OF_TAG_IDS] }` and `{ "remove_tags": [LIST_OF_TAG_IDS] }`
-  - `delete`
-    - No `parameters` required
-  - `reprocess`
-    - No `parameters` required
-  - `set_permissions`
-    - Requires `parameters`:
-      - `"set_permissions": PERMISSIONS_OBJ` (see format [above](#permissions)) and / or
-      - `"owner": OWNER_ID or null`
-      - `"merge": true or false` (defaults to false)
-    - The `merge` flag determines if the supplied permissions will overwrite all existing permissions (including
-      removing them) or be merged with existing permissions.
-  - `edit_pdf`
-    - Requires `parameters`:
-      - `"doc_ids": [DOCUMENT_ID]` A list of a single document ID to edit.
-      - `"operations": [OPERATION, ...]` A list of operations to perform on the documents. Each operation is a dictionary
-        with the following keys:
-        - `"page": PAGE_NUMBER` The page number to edit (1-based).
-        - `"rotate": DEGREES` Optional rotation in degrees (90, 180, 270).
-        - `"doc": OUTPUT_DOCUMENT_INDEX` Optional index of the output document for split operations.
-    - Optional `parameters`:
-      - `"delete_original": true` to delete the original documents after editing.
-      - `"update_document": true` to add the edited PDF as a new version of the root document.
-      - `"include_metadata": true` to copy metadata from the original document to the edited document.
-  - `remove_password`
-    - Requires `parameters`:
-      - `"password": "PASSWORD_STRING"` The password to remove from the PDF documents.
-    - Optional `parameters`:
-      - `"update_document": true` to add the password-less PDF as a new version of the root document.
-      - `"delete_original": true` to delete the original document after editing.
-      - `"include_metadata": true` to copy metadata from the original document to the new password-less document.
-  - `merge`
-    - No additional `parameters` required.
-    - The ordering of the merged document is determined by the list of IDs.
-    - Optional `parameters`:
-      - `"metadata_document_id": DOC_ID` apply metadata (tags, correspondent, etc.) from this document to the merged document.
-      - `"delete_originals": true` to delete the original documents. This requires the calling user being the owner of
-        all documents that are merged.
-  - `split`
-    - Requires `parameters`:
-      - `"pages": [..]` The list should be a list of pages and/or a ranges, separated by commas e.g. `"[1,2-3,4,5-7]"`
-    - Optional `parameters`:
-      - `"delete_originals": true` to delete the original document after consumption. This requires the calling user being the owner of
-        the document.
-    - The split operation only accepts a single document.
-  - `rotate`
-    - Requires `parameters`:
-      - `"degrees": DEGREES`. Must be an integer i.e. 90, 180, 270
-  - `delete_pages`
-    - Requires `parameters`:
-      - `"pages": [..]` The list should be a list of integers e.g. `"[2,3,4]"`
-    - The delete_pages operation only accepts a single document.
-  - `modify_custom_fields`
-    - Requires `parameters`:
-      - `"add_custom_fields": { CUSTOM_FIELD_ID: VALUE }`: JSON object consisting of custom field id:value pairs to add to the document, can also be a list of custom field IDs
-        to add with empty values.
-      - `"remove_custom_fields": [CUSTOM_FIELD_ID]`: custom field ids to remove from the document.
+  - `set_correspondent`
+    - Requires `parameters`: `{ "correspondent": CORRESPONDENT_ID }`
+  - `set_document_type`
+    - Requires `parameters`: `{ "document_type": DOCUMENT_TYPE_ID }`
+  - `set_storage_path`
+    - Requires `parameters`: `{ "storage_path": STORAGE_PATH_ID }`
+  - `add_tag`
+    - Requires `parameters`: `{ "tag": TAG_ID }`
+  - `remove_tag`
+    - Requires `parameters`: `{ "tag": TAG_ID }`
+  - `modify_tags`
+    - Requires `parameters`: `{ "add_tags": [LIST_OF_TAG_IDS] }` and `{ "remove_tags": [LIST_OF_TAG_IDS] }`
+  - `delete`
+    - No `parameters` required
+  - `reprocess`
+    - No `parameters` required
+  - `set_permissions`
+    - Requires `parameters`:
+      - `"set_permissions": PERMISSIONS_OBJ` (see format [above](#permissions)) and / or
+      - `"owner": OWNER_ID or null`
+      - `"merge": true or false` (defaults to false)
+    - The `merge` flag determines if the supplied permissions will overwrite all existing permissions (including
+      removing them) or be merged with existing permissions.
+  - `modify_custom_fields`
+    - Requires `parameters`:
+      - `"add_custom_fields": { CUSTOM_FIELD_ID: VALUE }`: JSON object consisting of custom field id:value pairs to add to the document, can also be a list of custom field IDs
+        to add with empty values.
+      - `"remove_custom_fields": [CUSTOM_FIELD_ID]`: custom field ids to remove from the document.
#### Document-editing operations
Beginning with version 10+, the API supports individual endpoints for document-editing operations (`merge`, `rotate`, `edit_pdf`, etc), so their documentation can be found in the API spec / viewer. Legacy document-editing methods via `/api/documents/bulk_edit/` are still supported for compatibility but are deprecated, and clients should migrate to the individual endpoints before they are removed in a future version.
### Objects
@@ -371,16 +335,16 @@ operations, using the endpoint: `/api/bulk_edit_objects/`, which requires a json
The REST API is versioned.
- Versioning ensures that changes to the API don't break older
clients.
- Clients specify the specific version of the API they wish to use
with every request and Paperless will handle the request using the
specified API version.
- Even if the underlying data model changes, supported older API
versions continue to serve compatible data.
- If no version is specified, Paperless serves the configured default
API version (currently `10`).
- Supported API versions are currently `9` and `10`.
API versions are specified by submitting an additional HTTP `Accept`
header with every request:
@@ -420,51 +384,56 @@ Initial API version.
#### Version 2
- Added field `Tag.color`. This read/write string field contains a hex
color such as `#a6cee3`.
- Added read-only field `Tag.text_color`. This field contains the text
color to use for a specific tag, which is either black or white
depending on the brightness of `Tag.color`.
- Removed field `Tag.colour`.
#### Version 3
- Permissions endpoints have been added.
- The format of the `/api/ui_settings/` has changed.
#### Version 4
- Consumption templates were refactored to workflows and API endpoints
changed as such.
#### Version 5
- Added bulk deletion methods for documents and objects.
#### Version 6
- Moved acknowledge tasks endpoint to be under `/api/tasks/acknowledge/`.
#### Version 7
- The format of select type custom fields has changed to return the options
as an array of objects with `id` and `label` fields as opposed to a simple
list of strings. When creating or updating a custom field value of a
document for a select type custom field, the value should be the `id` of
the option, whereas previously it was the index of the option.
#### Version 8
- The user field of document notes now returns a simplified user object
rather than just the user ID.
#### Version 9
- The document `created` field is now a date, not a datetime. The
`created_date` field is considered deprecated and will be removed in a
future version.
#### Version 10
-  - The `show_on_dashboard` and `show_in_sidebar` fields of saved views have been
-    removed. Relevant settings are now stored in the UISettings model.
+  - The `show_on_dashboard` and `show_in_sidebar` fields of saved views have been
+    removed. Relevant settings are now stored in the UISettings model. Compatibility is maintained
+    for versions < 10 until support for API v9 is dropped.
+  - Document-editing operations such as `merge`, `rotate`, and `edit_pdf` have been
+    moved from the bulk edit endpoint to their own individual endpoints. Using these methods via
+    the bulk edit endpoint is still supported for compatibility with versions < 10 until support
+    for API v9 is dropped.

File diff suppressed because it is too large.


@@ -8,17 +8,17 @@ common [OCR](#ocr) related settings and some frontend settings. If set, these wi
preference over the settings via environment variables. If not set, the environment setting
or applicable default will be utilized instead.
- If you run paperless on docker, `paperless.conf` is not used.
Rather, configure paperless by copying necessary options to
`docker-compose.env`.
- If you are running paperless on anything else, paperless will search
for the configuration file in these locations and use the first one
it finds:
- The environment variable `PAPERLESS_CONFIGURATION_PATH`
- `/path/to/paperless/paperless.conf`
- `/etc/paperless.conf`
- `/usr/local/etc/paperless.conf`
## Required services
@@ -1947,6 +1947,12 @@ current backend. If not supplied, defaults to "gpt-3.5-turbo" for OpenAI and "ll
Defaults to None.
#### [`PAPERLESS_AI_LLM_ALLOW_INTERNAL_ENDPOINTS=<bool>`](#PAPERLESS_AI_LLM_ALLOW_INTERNAL_ENDPOINTS) {#PAPERLESS_AI_LLM_ALLOW_INTERNAL_ENDPOINTS}
: If set to false, Paperless blocks AI endpoint URLs that resolve to non-public addresses (e.g. localhost).
Defaults to true, which allows internal endpoints.
#### [`PAPERLESS_AI_LLM_INDEX_TASK_CRON=<cron expression>`](#PAPERLESS_AI_LLM_INDEX_TASK_CRON) {#PAPERLESS_AI_LLM_INDEX_TASK_CRON}
: Configures the schedule to update the AI embeddings of text content and metadata for all documents. Only performed if


@@ -6,23 +6,23 @@ on Paperless-ngx.
Check out the source from GitHub. The repository is organized in the
following way:
- `main` always represents the latest release and will only see
changes when a new release is made.
- `dev` contains the code that will be in the next release.
- `feature-X` contains bigger changes that will be in some release, but
not necessarily the next one.
When making functional changes to Paperless-ngx, _always_ make your changes
on the `dev` branch.
Apart from that, the folder structure is as follows:
- `docs/` - Documentation.
- `src-ui/` - Code of the front end.
- `src/` - Code of the back end.
- `scripts/` - Various scripts that help with different parts of
development.
- `docker/` - Files required to build the docker image.
## Contributing to Paperless-ngx
@@ -94,18 +94,17 @@ first-time setup.
```
7. You can now either ...
- install Redis or
- use the included `scripts/start_services.sh` to use Docker to fire
up a Redis instance (and some other services such as Tika,
Gotenberg and a database server) or
- spin up a bare Redis container
```bash
docker run -d -p 6379:6379 --restart unless-stopped redis:latest
```
8. Continue with either back-end or front-end development or both :-).
@@ -118,9 +117,9 @@ work well for development, but you can use whatever you want.
Configure the IDE to use the `src/`-folder as the base source folder.
Configure the following launch configurations in your IDE:
- `uv run manage.py runserver`
- `uv run manage.py document_consumer`
- `uv run celery --app paperless worker -l DEBUG` (or any other log level)
To start them all:
@@ -146,11 +145,11 @@ pnpm ng build --configuration production
### Testing
- Run `pytest` in the `src/` directory to execute all tests. This also
generates an HTML coverage report. When running tests, `paperless.conf`
is loaded as well. However, the tests rely on the default
configuration. This is not ideal. But for now, make sure no settings
except for DEBUG are overridden when testing.
!!! note
@@ -254,14 +253,14 @@ these parts have to be translated separately.
### Front end localization
- The Angular front end does localization according to the [Angular documentation](https://angular.io/guide/i18n).
- The source language of the project is "en_US".
- The source strings end up in the file `src-ui/messages.xlf`.
- The translated strings need to be placed in the `src-ui/src/locale/` folder.
- In order to extract added or changed strings from the source files, call `ng extract-i18n`.
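For example, from the front-end directory (using the project's pnpm setup):

```shell-session
cd src-ui/
pnpm ng extract-i18n
```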
Adding new languages requires adding the translated files in the
`src-ui/src/locale/` folder and adjusting a couple files.
@@ -307,18 +306,18 @@ A majority of the strings that appear in the back end appear only when
the admin is used. However, some of these are still shown on the front
end (such as error messages).
- The Django application does localization according to the [Django documentation](https://docs.djangoproject.com/en/3.1/topics/i18n/translation/).
- The source language of the project is "en_US".
- Localization files end up in the folder `src/locale/`.
- In order to extract strings from the application, call `uv run manage.py makemessages -l en_US`. This is important after making changes to translatable strings.
- The message files need to be compiled for them to show up in the application. Call `uv run manage.py compilemessages` to do this. The generated files don't get committed into git, since these are derived artifacts. The build pipeline takes care of executing this command.
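A typical localization round-trip from the `src/` directory therefore looks like:

```shell-session
cd src/
uv run manage.py makemessages -l en_US  # extract translatable strings
uv run manage.py compilemessages        # compile .po files into .mo files
```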
Adding new languages requires adding the translated files in the
`src/locale/`-folder and adjusting the file
@@ -381,10 +380,10 @@ base code.
Paperless-ngx uses parsers to add documents. A parser is
responsible for:
- Retrieving the content from the original
- Creating a thumbnail
- _optional:_ Retrieving a created date from the original
- _optional:_ Creating an archived document from the original
Custom parsers can be added to Paperless-ngx to support more file types. In
order to do that, you need to write the parser itself and announce its
@@ -442,17 +441,17 @@ def myparser_consumer_declaration(sender, **kwargs):
}
```
- `parser` is a reference to a class that extends `DocumentParser`.
- `weight` is used whenever two or more parsers are able to parse a file: The parser with the higher weight wins. This can be used to override the parsers provided by Paperless-ngx.
- `mime_types` is a dictionary. The keys are the mime types your parser supports and the value is the default file extension that Paperless-ngx should use when storing files and serving them for download. We could guess that from the file extensions, but some mime types have many extensions associated with them and the Python methods responsible for guessing the extension do not always return the same value.
## Using Visual Studio Code devcontainer
@@ -471,9 +470,8 @@ To get started:
2. VS Code will prompt you with "Reopen in container". Do so and wait for the environment to start.
3. In case your host operating system is Windows:
- The Source Control view in Visual Studio Code might show: "The detected Git repository is potentially unsafe as the folder is owned by someone other than the current user." Use "Manage Unsafe Repositories" to fix this.
- Git might have detected modifications for all files, because Windows uses CRLF line endings. Run `git checkout .` in the container's terminal to fix this issue.
4. Initialize the project by running the task **Project Setup: Run all Init Tasks**. This
will initialize the database tables and create a superuser. Then you can compile the front end
@@ -538,12 +536,12 @@ class MyDateParserPlugin(DateParserPluginBase):
Your parser instance is initialized with a `DateParserConfig` object accessible via `self.config`. This provides:
- `languages: list[str]` - List of language codes for date parsing
- `timezone_str: str` - Timezone string for date localization
- `ignore_dates: set[datetime.date]` - Dates that should be filtered out
- `reference_time: datetime.datetime` - Current time for filtering future dates
- `filename_date_order: str | None` - Date order preference for filenames (e.g., "DMY", "MDY")
- `content_date_order: str` - Date order preference for content
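To make the filtering semantics of `ignore_dates` and `reference_time` concrete, here is a small self-contained sketch; the names and values are stand-ins, not the real plugin internals:

```python
import datetime

# Stand-ins for values a DateParserConfig would carry (illustrative only)
ignore_dates = {datetime.date(2020, 1, 1)}
reference_time = datetime.datetime(2026, 3, 12, 12, 0)

def keep_candidate(candidate: datetime.datetime) -> bool:
    """Apply the two filters described above: drop ignored dates and future dates."""
    return candidate.date() not in ignore_dates and candidate <= reference_time

print(keep_candidate(datetime.datetime(2020, 1, 1, 9, 30)))   # False: ignored date
print(keep_candidate(datetime.datetime(2027, 5, 1, 9, 30)))   # False: in the future
print(keep_candidate(datetime.datetime(2024, 11, 2, 9, 30)))  # True
```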
The base class provides two helper methods you can use:

View File

@@ -44,28 +44,28 @@ system. On Linux, chances are high that this location is
You can always drag those files out of that folder to use them
elsewhere. Here are a couple notes about that.
- Paperless-ngx never modifies your original documents. It keeps checksums of all documents and uses a scheduled sanity checker to check that they remain the same.
- By default, paperless uses the internal ID of each document as its filename. This might not be very convenient for export. However, you can adjust the way files are stored in paperless by [configuring the filename format](advanced_usage.md#file-name-handling).
- [The exporter](administration.md#exporter) is another easy way to get your files out of paperless with reasonable file names.
## _What file types does paperless-ngx support?_
**A:** Currently, the following files are supported:
- PDF documents, PNG images, JPEG images, TIFF images, GIF images and WebP images are processed with OCR and converted into PDF documents.
- Plain text documents are supported as well and are added verbatim to paperless.
- With the optional Tika integration enabled (see [Tika configuration](https://docs.paperless-ngx.com/configuration#tika)), Paperless also supports various Office documents (.docx, .doc, .odt, .ppt, .pptx, .odp, .xls, .xlsx, .ods).
Paperless-ngx determines the type of a file by inspecting its content
rather than its file extension. However, files processed via the

View File

@@ -28,36 +28,36 @@ physical documents into a searchable online archive so you can keep, well, _less
## Features
- **Organize and index** your scanned documents with tags, correspondents, types, and more.
- _Your_ data is stored locally on _your_ server and is never transmitted or shared in any way, unless you explicitly choose to do so.
- Performs **OCR** on your documents, adding searchable and selectable text, even to documents scanned with only images.
- Utilizes the open-source Tesseract engine to recognize more than 100 languages.
- _New!_ Supports remote OCR with Azure AI (opt-in).
- Documents are saved as PDF/A format which is designed for long term storage, alongside the unaltered originals.
- Uses machine-learning to automatically add tags, correspondents and document types to your documents.
- **New**: Paperless-ngx can now leverage AI (Large Language Models or LLMs) for document suggestions. This is an optional feature that can be enabled (and is disabled by default).
- Supports PDF documents, images, plain text files, Office documents (Word, Excel, PowerPoint, and LibreOffice equivalents)[^1] and more.
- Paperless stores your documents plain on disk. Filenames and folders are managed by paperless and their format can be configured freely with different configurations assigned to different documents.
- **Beautiful, modern web application** that features:
- Customizable dashboard with statistics.
- Filtering by tags, correspondents, types, and more.
- Bulk editing of tags, correspondents, types and more.
- Drag-and-drop uploading of documents throughout the app.
- Customizable views can be saved and displayed on the dashboard and / or sidebar.
- Support for custom fields of various data types.
- Shareable public links with optional expiration.
- **Full text search** helps you find what you need:
- Auto completion suggests relevant words from your documents.
- Results are sorted by relevance to your search query.
- Highlighting shows you which parts of the document matched the query.
- Searching for similar documents ("More like this")
- **Email processing**[^1]: import documents from your email accounts:
- Configure multiple accounts and rules for each account.
- After processing, paperless can perform actions on the messages such as marking as read, deleting and more.
- A built-in robust **multi-user permissions** system that supports 'global' permissions as well as per document or object.
- A powerful workflow system that gives you even more control.
- **Optimized** for multi core systems: Paperless-ngx consumes multiple documents in parallel.
- The integrated sanity checker makes sure that your document archive is in good health.
[^1]: Office document and email consumption support is optional and provided by Apache Tika (see [configuration](https://docs.paperless-ngx.com/configuration/#tika))

View File

@@ -42,12 +42,12 @@ The `CONSUMER_BARCODE_SCANNER` setting has been removed. zxing-cpp is now the on
### Action Required
- If you were already using `CONSUMER_BARCODE_SCANNER=ZXING`, simply remove the setting.
- If you had `CONSUMER_BARCODE_SCANNER=PYZBAR` or were using the default, no functional changes are needed beyond removing the setting. zxing-cpp supports all the same barcode formats and you should see improved detection reliability.
- The `libzbar0` / `libzbar-dev` system packages are no longer required and can be removed from any custom Docker images or host installations.
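For Docker Compose users, one way to drop the obsolete setting (illustrative; uses GNU sed and assumes the default environment file name):

```bash
# Remove any CONSUMER_BARCODE_SCANNER line from the environment file
sed -i '/CONSUMER_BARCODE_SCANNER/d' docker-compose.env
```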
## Database Engine

View File

@@ -44,8 +44,8 @@ account. In short, it automates the [Docker Compose setup](#docker) described be
#### Prerequisites
- Docker and Docker Compose must be [installed](https://docs.docker.com/engine/install/){:target="\_blank"}.
- macOS users will need [GNU sed](https://formulae.brew.sh/formula/gnu-sed) with support for running as `sed` as well as [wget](https://formulae.brew.sh/formula/wget).
#### Run the installation script
@@ -63,7 +63,7 @@ credentials you provided during the installation script.
#### Prerequisites
- Docker and Docker Compose must be [installed](https://docs.docker.com/engine/install/){:target="\_blank"}.
#### Installation
@@ -101,7 +101,7 @@ credentials you provided during the installation script.
```yaml
ports:
- 8010:8000
```
3. Modify `docker-compose.env` with any configuration options you need.
@@ -145,11 +145,11 @@ a [superuser](usage.md#superusers) account.
If you want to run Paperless as a rootless container, make this
change in `docker-compose.yml`:
- Set the `user` running the container to map to the `paperless` user in the container. This value (`user_id` below) should be the same ID that `USERMAP_UID` and `USERMAP_GID` are set to in `docker-compose.env`. See `USERMAP_UID` and `USERMAP_GID` [here](configuration.md#docker).
Your entry for Paperless should contain something like:
@@ -171,26 +171,25 @@ to enable polling and disable inotify. See [here](configuration.md#polling).
#### Prerequisites
- Paperless runs on Linux only; Windows is not supported.
- Python 3.11, 3.12, 3.13, or 3.14 is required. As a policy, Paperless-ngx aims to support at least the three most recent Python versions and drops support for versions as they reach end-of-life. Newer versions may work, but some dependencies may not be fully compatible.
#### Installation
1. Install dependencies. Paperless requires the following packages:
- `python3`
- `python3-pip`
- `python3-dev`
- `default-libmysqlclient-dev` for MariaDB
- `pkg-config` for mysqlclient (python dependency)
- `fonts-liberation` for generating thumbnails for plain text files
- `imagemagick` >= 6 for PDF conversion
- `gnupg` for handling encrypted documents
- `libpq-dev` for PostgreSQL
- `libmagic-dev` for mime type detection
- `mariadb-client` for MariaDB compile time
- `poppler-utils` for barcode detection
Use this list for your preferred package management:
@@ -200,18 +199,17 @@ to enable polling and disable inotify. See [here](configuration.md#polling).
These dependencies are required for OCRmyPDF, which is used for text
recognition.
- `unpaper`
- `ghostscript`
- `icc-profiles-free`
- `qpdf`
- `liblept5`
- `libxml2`
- `pngquant` (suggested for certain PDF image optimizations)
- `zlib1g`
- `tesseract-ocr` >= 4.0.0 for OCR
- `tesseract-ocr` language packs (`tesseract-ocr-eng`, `tesseract-ocr-deu`, etc)
Use this list for your preferred package management:
@@ -220,16 +218,14 @@ to enable polling and disable inotify. See [here](configuration.md#polling).
```
On Raspberry Pi, these libraries are required as well:
- `libatlas-base-dev`
- `libxslt1-dev`
- `mime-support`
You will also need these for installing some of the python dependencies:
- `build-essential`
- `python3-setuptools`
- `python3-wheel`
Use this list for your preferred package management:
@@ -279,44 +275,41 @@ to enable polling and disable inotify. See [here](configuration.md#polling).
6. Configure Paperless-ngx. See [configuration](configuration.md) for details.
Edit the included `paperless.conf` and adjust the settings to your
needs. Required settings for getting Paperless-ngx running are:
- [`PAPERLESS_REDIS`](configuration.md#PAPERLESS_REDIS) should point to your Redis server, such as `redis://localhost:6379`.
- [`PAPERLESS_DBENGINE`](configuration.md#PAPERLESS_DBENGINE) is optional, and should be one of `postgres`, `mariadb`, or `sqlite`.
- [`PAPERLESS_DBHOST`](configuration.md#PAPERLESS_DBHOST) should be the hostname on which your PostgreSQL server is running. Do not configure this to use SQLite instead. Also configure port, database name, user and password as necessary.
- [`PAPERLESS_CONSUMPTION_DIR`](configuration.md#PAPERLESS_CONSUMPTION_DIR) should point to the folder that Paperless-ngx should watch for incoming documents. Likewise, [`PAPERLESS_DATA_DIR`](configuration.md#PAPERLESS_DATA_DIR) and [`PAPERLESS_MEDIA_ROOT`](configuration.md#PAPERLESS_MEDIA_ROOT) define where Paperless-ngx stores its data. If needed, these can point to the same directory.
- [`PAPERLESS_SECRET_KEY`](configuration.md#PAPERLESS_SECRET_KEY) should be a random sequence of characters. It's used for authentication. Failure to set this allows third parties to forge authentication credentials.
- Set [`PAPERLESS_URL`](configuration.md#PAPERLESS_URL) if you are behind a reverse proxy. This should point to your domain. Please see [configuration](configuration.md) for more information.
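Put together, a minimal `paperless.conf` for a local Redis and PostgreSQL setup might contain (all values are illustrative):

```bash
PAPERLESS_REDIS=redis://localhost:6379
PAPERLESS_DBHOST=localhost
PAPERLESS_CONSUMPTION_DIR=/opt/paperless/consume
PAPERLESS_DATA_DIR=/opt/paperless/data
PAPERLESS_MEDIA_ROOT=/opt/paperless/media
PAPERLESS_SECRET_KEY=change-me-to-a-long-random-string
PAPERLESS_URL=https://paperless.example.com
```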
You can make many more adjustments, especially for OCR.
The following options are recommended for most users:
- Set [`PAPERLESS_OCR_LANGUAGE`](configuration.md#PAPERLESS_OCR_LANGUAGE) to the language most of your documents are written in.
- Set [`PAPERLESS_TIME_ZONE`](configuration.md#PAPERLESS_TIME_ZONE) to your local time zone.
!!! warning
Ensure your Redis instance [is secured](https://redis.io/docs/latest/operate/oss_and_stack/management/security/).
7. Create the following directories if they do not already exist:
- `/opt/paperless/media`
- `/opt/paperless/data`
- `/opt/paperless/consume`
Adjust these paths if you configured different folders.
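For example, to create all three in one go:

```bash
mkdir -p /opt/paperless/{media,data,consume}
```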
Then verify that the `paperless` user has write permissions:
@@ -391,11 +384,10 @@ to enable polling and disable inotify. See [here](configuration.md#polling).
starting point.
Paperless needs:
- The `webserver` script to run the webserver.
- The `consumer` script to watch the input folder.
- The `taskqueue` script for background workers (document consumption, etc.).
- The `scheduler` script for periodic tasks such as email checking.
- The `webserver` script to run the webserver.
- The `consumer` script to watch the input folder.
- The `taskqueue` script for background workers (document consumption, etc.).
- The `scheduler` script for periodic tasks such as email checking.
!!! note
@@ -501,19 +493,19 @@ your setup depending on how you installed Paperless.
This section describes how to update an existing Paperless Docker
installation. Keep these points in mind:
- Read the [changelog](changelog.md) and take note of breaking changes.
- Decide whether to stay on SQLite or migrate to PostgreSQL. Both work fine with Paperless-ngx. However, if you already have a database server running for other services, you might as well use it for Paperless too.
- The task scheduler of Paperless, which is used to execute periodic tasks such as email checking and maintenance, requires a [Redis](https://redis.io/) message broker instance. The Docker Compose route takes care of that.
- The layout of the folder structure for your documents and data remains the same, so you can plug your old Docker volumes into paperless-ngx and expect it to find everything where it should be.
Migration to Paperless-ngx is then performed in a few simple steps:
@@ -598,7 +590,6 @@ commands as well.
1. Stop and remove the Paperless container.
2. If using an external database, stop that container.
3. Update Redis configuration.
1. If `REDIS_URL` is already set, change it to [`PAPERLESS_REDIS`](configuration.md#PAPERLESS_REDIS)
and continue to step 4.
@@ -610,22 +601,18 @@ commands as well.
the new Redis container.
4. Update user mapping.
1. If set, change the environment variable `PUID` to `USERMAP_UID`.
1. If set, change the environment variable `PGID` to `USERMAP_GID`.
5. Update configuration paths.
1. Set the environment variable [`PAPERLESS_DATA_DIR`](configuration.md#PAPERLESS_DATA_DIR) to `/config`.
6. Update media paths.
1. Set the environment variable [`PAPERLESS_MEDIA_ROOT`](configuration.md#PAPERLESS_MEDIA_ROOT) to
`/data/media`.
7. Update timezone.
1. Set the environment variable [`PAPERLESS_TIME_ZONE`](configuration.md#PAPERLESS_TIME_ZONE) to the same
value as `TZ`.
@@ -639,33 +626,33 @@ commands as well.
Paperless runs on Raspberry Pi. Some tasks can be slow on lower-powered
hardware, but a few settings can improve performance:
- Stick with SQLite to save some resources. See [troubleshooting](troubleshooting.md#log-reports-creating-paperlesstask-failed)
if you encounter issues with SQLite locking.
- If you do not need the filesystem-based consumer, consider disabling it
entirely by setting [`PAPERLESS_CONSUMER_DISABLE`](configuration.md#PAPERLESS_CONSUMER_DISABLE) to `true`.
- Consider setting [`PAPERLESS_OCR_PAGES`](configuration.md#PAPERLESS_OCR_PAGES) to 1, so that Paperless
OCRs only the first page of your documents. In most cases, this page
contains enough information to be able to find it.
- [`PAPERLESS_TASK_WORKERS`](configuration.md#PAPERLESS_TASK_WORKERS) and [`PAPERLESS_THREADS_PER_WORKER`](configuration.md#PAPERLESS_THREADS_PER_WORKER) are
configured to use all cores. The Raspberry Pi models 3 and up have 4
cores, meaning that Paperless will use 2 workers and 2 threads per
worker. This may result in sluggish response times during
consumption, so you might want to lower these settings (example: 2
workers and 1 thread to always have some computing power left for
other tasks).
- Keep [`PAPERLESS_OCR_MODE`](configuration.md#PAPERLESS_OCR_MODE) at its default value `skip` and consider
OCRing your documents before feeding them into Paperless. Some
scanners are able to do this!
- Set [`PAPERLESS_OCR_SKIP_ARCHIVE_FILE`](configuration.md#PAPERLESS_OCR_SKIP_ARCHIVE_FILE) to `with_text` to skip archive
file generation for already OCRed documents, or `always` to skip it
for all documents.
- If you want to perform OCR on the device, consider using
`PAPERLESS_OCR_CLEAN=none`. This will speed up OCR times and use
less memory at the expense of slightly worse OCR results.
- If using Docker, consider setting [`PAPERLESS_WEBSERVER_WORKERS`](configuration.md#PAPERLESS_WEBSERVER_WORKERS) to 1. This will save some memory.
- Consider setting [`PAPERLESS_ENABLE_NLTK`](configuration.md#PAPERLESS_ENABLE_NLTK) to false, to disable the
more advanced language processing, which can take more memory and
processing time.
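As a starting point, a low-resource configuration drawing on the suggestions above might set (values illustrative; tune for your workload):

```bash
PAPERLESS_TASK_WORKERS=2
PAPERLESS_THREADS_PER_WORKER=1
PAPERLESS_OCR_PAGES=1
PAPERLESS_OCR_SKIP_ARCHIVE_FILE=with_text
PAPERLESS_WEBSERVER_WORKERS=1
PAPERLESS_ENABLE_NLTK=false
```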
For details, refer to [configuration](configuration.md).

View File

@@ -4,27 +4,27 @@
Check for the following issues:
- Ensure that the directory you're putting your documents in is the folder paperless is watching. With docker, this setting is performed in the `docker-compose.yml` file. Without Docker, look at the `CONSUMPTION_DIR` setting. Don't adjust this setting if you're using docker.
- Ensure that redis is up and running. Paperless does its task processing asynchronously, and for documents to arrive at the task processor, it needs redis to run.
- Ensure that the task processor is running. Docker does this automatically. Manually invoke the task processor by executing

  ```shell-session
  celery --app paperless worker
  ```

- Look at the output of paperless and inspect it for any errors.
- Go to the admin interface, and check if there are failed tasks. If so, the tasks will contain an error message.
## Consumer warns `OCR for XX failed`
@@ -78,12 +78,12 @@ Ensure that `chown` is possible on these directories.
This indicates that the Auto matching algorithm found no documents to
learn from. This may have two reasons:
- You don't use the Auto matching algorithm: The error can be safely ignored in this case.
- You are using the Auto matching algorithm: The classifier explicitly excludes documents with Inbox tags. Verify that there are documents in your archive without inbox tags. The algorithm will only learn from documents not in your inbox.
## UserWarning in sklearn on every single document
@@ -127,10 +127,10 @@ change in the `docker-compose.yml` file:
# The gotenberg chromium route is used to convert .eml files. We do not
# want to allow external content like tracking pixels or even javascript.
command:
  - 'gotenberg'
  - '--chromium-disable-javascript=true'
  - '--chromium-allow-list=file:///tmp/.*'
  - '--api-timeout=60s'
```
## Permission denied errors in the consumption directory

View File

@@ -14,42 +14,42 @@ for finding and managing your documents.
Paperless essentially consists of two different parts for managing your
documents:
- The _consumer_ watches a specified folder and adds all documents in that folder to paperless.
- The _web server_ (web UI) provides a UI that you use to manage and search documents.
Each document has data fields that you can assign to it:
- A _Document_ is a piece of paper that sometimes contains valuable information.
- The _correspondent_ of a document is the person, institution or company that a document either originates from, or is sent to.
- A _tag_ is a label that you can assign to documents. Think of labels as more powerful folders: Multiple documents can be grouped together with a single tag, however, a single document can also have multiple tags. This is not possible with folders. The reason folders are not implemented in paperless is simply that tags are much more versatile than folders.
- A _document type_ is used to demarcate the type of a document such as letter, bank statement, invoice, contract, etc. It is used to identify what a document is about.
- The document _storage path_ is the location where the document files are stored. See [Storage Paths](advanced_usage.md#storage-paths) for more information.
- The _date added_ of a document is the date the document was scanned into paperless. You cannot and should not change this date.
- The _date created_ of a document is the date the document was initially issued. This can be the date you bought a product, the date you signed a contract, or the date a letter was sent to you.
- The _archive serial number_ (short: ASN) of a document is the identifier of the document in your physical document binders. See [recommended workflow](#usage-recommended-workflow) below.
- The _content_ of a document is the text that was OCR'ed from the document. This text is fed into the search engine and is used for matching tags, correspondents and document types.
- Paperless-ngx also supports _custom fields_ which can be used to store additional metadata about a document.
## The Web UI
@@ -93,12 +93,12 @@ download the document or share it via a share link.
Think of versions as **file history** for a document.
- Versions track the underlying file and extracted text content (OCR/text).
- Metadata such as tags, correspondent, document type, storage path and custom fields stay on the "root" document.
- Version files follow normal filename formatting (including storage paths) and add a `_vN` suffix (for example `_v1`, `_v2`).
- By default, search and document content use the latest version.
- In document detail, selecting a version switches the preview, file metadata and content (and download etc buttons) to that version.
- Deleting a non-root version keeps metadata and falls back to the latest remaining version.
### Management Lists
@@ -218,21 +218,20 @@ patterns can include wildcards and multiple patterns separated by a comma.
The actions all ensure that the same mail is not consumed twice by
different means. These are as follows:
- **Delete:** Immediately deletes mail that paperless has consumed documents from. Use with caution.
- **Mark as read:** Mark consumed mail as read. Paperless will not consume documents from already read mails. If you read a mail before paperless sees it, it will be ignored.
- **Flag:** Sets the 'important' flag on mails with consumed documents. Paperless will not consume flagged mails.
- **Move to folder:** Moves consumed mails out of the way so that paperless won't consume them again.
- **Add custom Tag:** Adds a custom tag to mails with consumed documents (the IMAP standard calls these "keywords"). Paperless will not consume mails already tagged. Not all mail servers support this feature!
- **Apple Mail support:** Apple Mail clients allow differently colored tags. For this to work use `apple:<color>` (e.g. _apple:green_) as a custom tag. Available colors are _red_, _orange_, _yellow_, _blue_, _green_, _violet_ and _grey_.
!!! warning
@@ -325,12 +324,12 @@ or using [email](#workflow-action-email) or [webhook](#workflow-action-webhook)
"Share links" are public links to files (or an archive of files) and can be created and managed under the 'Send' button on the document detail screen or from the bulk editor.
- Share links do not require a user to log in and thus link directly to a file or bundled download.
- Links are unique and are of the form `{paperless-url}/share/{randomly-generated-slug}`.
- Links can optionally have an expiration time set.
- After a link expires or is deleted, users will be redirected to the regular paperless-ngx login.
- From the document detail screen you can create a share link for that single document.
- From the bulk editor you can create a **share link bundle** for any selection. Paperless-ngx prepares a ZIP archive in the background and exposes a single share link. You can revisit the "Manage share link bundles" dialog to monitor progress, retry failed bundles, or delete links.
!!! tip
@@ -514,25 +513,25 @@ flowchart TD
Workflows allow you to filter by:
- Source, e.g. documents uploaded via consume folder, API (& the web UI) and mail fetch
- File name, including wildcards e.g. \*.pdf will apply to all pdfs.
- File path, including wildcards. Note that enabling `PAPERLESS_CONSUMER_RECURSIVE` would allow, for example, automatically assigning documents to different owners based on the upload directory.
- Mail rule. Choosing this option will force 'mail fetch' to be the workflow source.
- Content matching (`Added`, `Updated` and `Scheduled` triggers only). Filter document content using the matching settings.
There are also 'advanced' filters available for `Added`, `Updated` and `Scheduled` triggers:
- Any Tags: Filter for documents with any of the specified tags.
- All Tags: Filter for documents with all of the specified tags.
- No Tags: Filter for documents with none of the specified tags.
- Document type: Filter documents with this document type.
- Not Document types: Filter documents without any of these document types.
- Correspondent: Filter documents with this correspondent.
- Not Correspondents: Filter documents without any of these correspondents.
- Storage path: Filter documents with this storage path.
- Not Storage paths: Filter documents without any of these storage paths.
- Custom field query: Filter documents with a custom field query (the same as used for the document list filters).
### Workflow Actions
@@ -544,37 +543,37 @@ The following workflow action types are available:
"Assignment" actions can assign:
- Title, see [workflow placeholders](usage.md#workflow-placeholders) below
- Tags, correspondent, document type and storage path
- Document owner
- View and / or edit permissions to users or groups
- Custom fields. Note that no value for the field will be set
##### Removal {#workflow-action-removal}
"Removal" actions can remove either all of or specific sets of the following:
- Tags, correspondents, document types or storage paths
- Document owner
- View and / or edit permissions
- Custom fields
##### Email {#workflow-action-email}
"Email" actions can send documents via email. This action requires a mail server to be [configured](configuration.md#email-sending). You can specify:
- The recipient email address(es) separated by commas
- The subject and body of the email, which can include placeholders, see [placeholders](usage.md#workflow-placeholders) below
- Whether to include the document as an attachment
##### Webhook {#workflow-action-webhook}
"Webhook" actions send a POST request to a specified URL. You can specify:
- The URL to send the request to
- The request body as text or as key-value pairs, which can include placeholders, see [placeholders](usage.md#workflow-placeholders) below.
- Encoding for the request body, either JSON or form data
- The request headers as key-value pairs
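For instance, a JSON-encoded request body using the placeholders described below might look like this (the field names are your choice; `{{doc_url}}` is only available for "added" and "updated" triggers):

```json
{
  "title": "{{doc_title}}",
  "url": "{{doc_url}}",
  "correspondent": "{{correspondent}}"
}
```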
For security reasons, webhooks can be limited to specific ports and disallowed from connecting to local URLs. See the relevant
[configuration settings](configuration.md#workflow-webhooks) to change this behavior. If you are allowing non-admins to create workflows,
@@ -605,33 +604,33 @@ The available inputs differ depending on the type of workflow trigger.
This is because at the time of consumption (when the text is to be set), no automatic tags etc. have been
applied. You can use the following placeholders in the template with any trigger type:
- `{{correspondent}}`: assigned correspondent name
- `{{document_type}}`: assigned document type name
- `{{owner_username}}`: assigned owner username
- `{{added}}`: added datetime
- `{{added_year}}`: added year
- `{{added_year_short}}`: added year (two-digit)
- `{{added_month}}`: added month
- `{{added_month_name}}`: added month name
- `{{added_month_name_short}}`: added month short name
- `{{added_day}}`: added day
- `{{added_time}}`: added time in HH:MM format
- `{{original_filename}}`: original file name without extension
- `{{filename}}`: current file name without extension (for "added" workflows this may not be final yet; you can use `{{original_filename}}`)
- `{{doc_title}}`: current document title (cannot be used in title assignment)
The following placeholders are only available for "added" or "updated" triggers:
- `{{created}}`: created datetime
- `{{created_year}}`: created year
- `{{created_year_short}}`: created year (two-digit)
- `{{created_month}}`: created month
- `{{created_month_name}}`: created month name
- `{{created_month_name_short}}`: created month short name
- `{{created_day}}`: created day
- `{{created_time}}`: created time in HH:MM format
- `{{doc_url}}`: URL to the document in the web UI. Requires the `PAPERLESS_URL` setting to be set.
- `{{doc_id}}`: Document ID
##### Examples
@@ -676,26 +675,26 @@ Multiple fields may be attached to a document but the same field name cannot be
The following custom field types are supported:
- `Text`: any text
- `Boolean`: true / false (checked / unchecked) field
- `Date`: date
- `URL`: a valid url
- `Integer`: integer number e.g. 12
- `Number`: float number e.g. 12.3456
- `Monetary`: [ISO 4217 currency code](https://en.wikipedia.org/wiki/ISO_4217#List_of_ISO_4217_currency_codes) and a number with exactly two decimals, e.g. USD12.30
- `Document Link`: reference(s) to other document(s) displayed as links, automatically creates a symmetrical link in reverse
- `Select`: a pre-defined list of strings from which the user can choose
## PDF Actions
Paperless-ngx supports basic editing operations for PDFs (these operations currently cannot be performed on non-PDF files). When viewing an individual document you can
open the 'PDF Editor' to use a simple UI for re-arranging, rotating, deleting pages and splitting documents.
- Merging documents: available when selecting multiple documents for 'bulk editing'.
- Rotating documents: available when selecting multiple documents for 'bulk editing' and via the pdf editor on an individual document's details page.
- Splitting documents: via the pdf editor on an individual document's details page.
- Deleting pages: via the pdf editor on an individual document's details page.
- Re-arranging pages: via the pdf editor on an individual document's details page.
!!! important
@@ -773,18 +772,18 @@ the system.
Here are a couple examples of tags and types that you could use in your
collection.
- An `inbox` tag for newly added documents that you haven't manually edited yet.
- A tag `car` for everything car related (repairs, registration, insurance, etc)
- A tag `todo` for documents that you still need to do something with, such as reply, or perform some task online.
- A tag `bank account x` for all bank statements related to that account.
- A tag `mail` for anything that you added to paperless via its mail processing capabilities.
- A tag `missing_metadata` when you still need to add some metadata to a document, but can't or don't want to do this right now.
## Searching {#basic-usage_searching}
@@ -873,8 +872,8 @@ The following diagram shows how easy it is to manage your documents.
### Preparations in paperless
- Create an inbox tag that gets assigned to all new documents.
- Create a TODO tag.
### Processing of the physical documents
@@ -948,15 +947,15 @@ Some documents require attention and require you to act on the document.
You may take two different approaches to handle these documents based on
how regularly you intend to scan documents and use paperless.
- If you scan and process your documents in paperless regularly,
assign a TODO tag to all scanned documents that you need to process.
Create a saved view on the dashboard that shows all documents with
this tag.
- If you do not scan documents regularly and use paperless solely for
archiving, create a physical todo box next to your physical inbox
and put documents you need to process in the TODO box. When you
have performed the task associated with the document, move it to the
inbox.
## Remote OCR
@@ -977,64 +976,63 @@ or page limitations (e.g. with a free tier).
Paperless-ngx consists of the following components:
- **The webserver:** This serves the administration pages, the API,
and the new frontend. This is the main tool you'll be using to interact
with paperless. You may start the webserver directly with
```shell-session
cd /path/to/paperless/src/
granian --interface asginl --ws "paperless.asgi:application"
```
or by any other means such as Apache `mod_wsgi`.
- **The consumer:** This is what watches your consumption folder for
documents. However, the consumer itself does not really consume your
documents. Instead, it notifies a task processor that a new file is ready
for consumption. I suppose it should be named differently. This was
also used to check your emails, but that's now done elsewhere as
well.
Start the consumer with the management command `document_consumer`:
```shell-session
cd /path/to/paperless/src/
python3 manage.py document_consumer
```
- **The task processor:** Paperless relies on [Celery - Distributed
Task Queue](https://docs.celeryq.dev/en/stable/index.html) for doing
most of the heavy lifting. This is a task queue that accepts tasks
from multiple sources and processes these in parallel. It also comes
with a scheduler that executes certain commands periodically.
This task processor is responsible for:
- Consuming documents. When the consumer finds new documents, it
notifies the task processor to start a consumption task.
- The task processor also performs the consumption of any
documents you upload through the web interface.
- Consuming emails. It periodically checks your configured
accounts for new emails and notifies the task processor to
consume the attachment of an email.
- Maintaining the search index and the automatic matching
algorithm. These are things that paperless needs to do from time
to time in order to operate properly.
This allows paperless to process multiple documents from your
consumption folder in parallel! On a modern multi-core system, this
makes the consumption process with full OCR blazingly fast.
The task processor comes with a built-in admin interface that you
can use to check whether any of the tasks fail and inspect the
errors (e.g., wrong email credentials, errors during consuming a
specific file, etc).
- A [redis](https://redis.io/) message broker: This is a really
lightweight service that is responsible for getting the tasks from
the webserver and the consumer to the task scheduler. These run in
separate processes (maybe even on different machines!), and
therefore, this is necessary.
- Optional: A database server. Paperless supports PostgreSQL, MariaDB
and SQLite for storing its data.


@@ -45,7 +45,7 @@ dependencies = [
"drf-spectacular-sidecar~=2026.1.1",
"drf-writable-nested~=0.7.1",
"faiss-cpu>=1.10",
"filelock~=3.24.3",
"filelock~=3.20.3",
"flower~=2.0.1",
"gotenberg-client~=0.13.1",
"httpx-oauth~=0.16",


@@ -1217,7 +1217,7 @@
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1760</context>
<context context-type="linenumber">1758</context>
</context-group>
</trans-unit>
<trans-unit id="1577733187050997705" datatype="html">
@@ -2802,19 +2802,19 @@
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1761</context>
<context context-type="linenumber">1759</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">802</context>
<context context-type="linenumber">833</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">835</context>
<context context-type="linenumber">871</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">854</context>
<context context-type="linenumber">894</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/manage/document-attributes/custom-fields/custom-fields.component.ts</context>
@@ -3404,27 +3404,27 @@
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">445</context>
<context context-type="linenumber">470</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">485</context>
<context context-type="linenumber">510</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">523</context>
<context context-type="linenumber">548</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">561</context>
<context context-type="linenumber">586</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">623</context>
<context context-type="linenumber">648</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">756</context>
<context context-type="linenumber">781</context>
</context-group>
</trans-unit>
<trans-unit id="994016933065248559" datatype="html">
@@ -3512,7 +3512,7 @@
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1814</context>
<context context-type="linenumber">1812</context>
</context-group>
</trans-unit>
<trans-unit id="6661109599266152398" datatype="html">
@@ -3523,7 +3523,7 @@
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1815</context>
<context context-type="linenumber">1813</context>
</context-group>
</trans-unit>
<trans-unit id="5162686434580248853" datatype="html">
@@ -3534,7 +3534,7 @@
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1816</context>
<context context-type="linenumber">1814</context>
</context-group>
</trans-unit>
<trans-unit id="8157388568390631653" datatype="html">
@@ -5499,7 +5499,7 @@
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">760</context>
<context context-type="linenumber">785</context>
</context-group>
</trans-unit>
<trans-unit id="4522609911791833187" datatype="html">
@@ -7327,7 +7327,7 @@
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">390</context>
<context context-type="linenumber">415</context>
</context-group>
<note priority="1" from="description">this string is used to separate processing, failed and added on the file upload widget</note>
</trans-unit>
@@ -7851,7 +7851,7 @@
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">758</context>
<context context-type="linenumber">783</context>
</context-group>
</trans-unit>
<trans-unit id="7295637485862454066" datatype="html">
@@ -7869,7 +7869,7 @@
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">798</context>
<context context-type="linenumber">829</context>
</context-group>
</trans-unit>
<trans-unit id="2951161989614003846" datatype="html">
@@ -7890,88 +7890,88 @@
<source>Reprocess operation for &quot;<x id="PH" equiv-text="this.document.title"/>&quot; will begin in the background.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1387</context>
<context context-type="linenumber">1385</context>
</context-group>
</trans-unit>
<trans-unit id="4409560272830824468" datatype="html">
<source>Error executing operation</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1398</context>
<context context-type="linenumber">1396</context>
</context-group>
</trans-unit>
<trans-unit id="6030453331794586802" datatype="html">
<source>Error downloading document</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1461</context>
<context context-type="linenumber">1459</context>
</context-group>
</trans-unit>
<trans-unit id="4458954481601077369" datatype="html">
<source>Page Fit</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1541</context>
<context context-type="linenumber">1539</context>
</context-group>
</trans-unit>
<trans-unit id="4663705961777238777" datatype="html">
<source>PDF edit operation for &quot;<x id="PH" equiv-text="this.document.title"/>&quot; will begin in the background.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1781</context>
<context context-type="linenumber">1779</context>
</context-group>
</trans-unit>
<trans-unit id="9043972994040261999" datatype="html">
<source>Error executing PDF edit operation</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1793</context>
<context context-type="linenumber">1791</context>
</context-group>
</trans-unit>
<trans-unit id="6172690334763056188" datatype="html">
<source>Please enter the current password before attempting to remove it.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1804</context>
<context context-type="linenumber">1802</context>
</context-group>
</trans-unit>
<trans-unit id="968660764814228922" datatype="html">
<source>Password removal operation for &quot;<x id="PH" equiv-text="this.document.title"/>&quot; will begin in the background.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1838</context>
<context context-type="linenumber">1836</context>
</context-group>
</trans-unit>
<trans-unit id="2282118435712883014" datatype="html">
<source>Error executing password removal operation</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1852</context>
<context context-type="linenumber">1850</context>
</context-group>
</trans-unit>
<trans-unit id="3740891324955700797" datatype="html">
<source>Print failed.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1891</context>
<context context-type="linenumber">1889</context>
</context-group>
</trans-unit>
<trans-unit id="6457245677384603573" datatype="html">
<source>Error loading document for printing.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1903</context>
<context context-type="linenumber">1901</context>
</context-group>
</trans-unit>
<trans-unit id="6085793215710522488" datatype="html">
<source>An error occurred loading tiff: <x id="PH" equiv-text="err.toString()"/></source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1968</context>
<context context-type="linenumber">1966</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-detail/document-detail.component.ts</context>
<context context-type="linenumber">1972</context>
<context context-type="linenumber">1970</context>
</context-group>
</trans-unit>
<trans-unit id="4958946940233632319" datatype="html">
@@ -8215,25 +8215,25 @@
<source>Error executing bulk operation</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">294</context>
<context context-type="linenumber">321</context>
</context-group>
</trans-unit>
<trans-unit id="7894972847287473517" datatype="html">
<source>&quot;<x id="PH" equiv-text="items[0].name"/>&quot;</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">382</context>
<context context-type="linenumber">407</context>
</context-group>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">388</context>
<context context-type="linenumber">413</context>
</context-group>
</trans-unit>
<trans-unit id="8639884465898458690" datatype="html">
<source>&quot;<x id="PH" equiv-text="items[0].name"/>&quot; and &quot;<x id="PH_1" equiv-text="items[1].name"/>&quot;</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">384</context>
<context context-type="linenumber">409</context>
</context-group>
<note priority="1" from="description">This is for messages like &apos;modify &quot;tag1&quot; and &quot;tag2&quot;&apos;</note>
</trans-unit>
@@ -8241,7 +8241,7 @@
<source><x id="PH" equiv-text="list"/> and &quot;<x id="PH_1" equiv-text="items[items.length - 1].name"/>&quot;</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">392,394</context>
<context context-type="linenumber">417,419</context>
</context-group>
<note priority="1" from="description">this is for messages like &apos;modify &quot;tag1&quot;, &quot;tag2&quot; and &quot;tag3&quot;&apos;</note>
</trans-unit>
@@ -8249,14 +8249,14 @@
<source>Confirm tags assignment</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">409</context>
<context context-type="linenumber">434</context>
</context-group>
</trans-unit>
<trans-unit id="6619516195038467207" datatype="html">
<source>This operation will add the tag &quot;<x id="PH" equiv-text="tag.name"/>&quot; to <x id="PH_1" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">415</context>
<context context-type="linenumber">440</context>
</context-group>
</trans-unit>
<trans-unit id="1894412783609570695" datatype="html">
@@ -8265,14 +8265,14 @@
)"/> to <x id="PH_1" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">420,422</context>
<context context-type="linenumber">445,447</context>
</context-group>
</trans-unit>
<trans-unit id="7181166515756808573" datatype="html">
<source>This operation will remove the tag &quot;<x id="PH" equiv-text="tag.name"/>&quot; from <x id="PH_1" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">428</context>
<context context-type="linenumber">453</context>
</context-group>
</trans-unit>
<trans-unit id="3819792277998068944" datatype="html">
@@ -8281,7 +8281,7 @@
)"/> from <x id="PH_1" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">433,435</context>
<context context-type="linenumber">458,460</context>
</context-group>
</trans-unit>
<trans-unit id="2739066218579571288" datatype="html">
@@ -8292,84 +8292,84 @@
)"/> on <x id="PH_2" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">437,441</context>
<context context-type="linenumber">462,466</context>
</context-group>
</trans-unit>
<trans-unit id="2996713129519325161" datatype="html">
<source>Confirm correspondent assignment</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">478</context>
<context context-type="linenumber">503</context>
</context-group>
</trans-unit>
<trans-unit id="6900893559485781849" datatype="html">
<source>This operation will assign the correspondent &quot;<x id="PH" equiv-text="correspondent.name"/>&quot; to <x id="PH_1" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">480</context>
<context context-type="linenumber">505</context>
</context-group>
</trans-unit>
<trans-unit id="1257522660364398440" datatype="html">
<source>This operation will remove the correspondent from <x id="PH" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">482</context>
<context context-type="linenumber">507</context>
</context-group>
</trans-unit>
<trans-unit id="5393409374423140648" datatype="html">
<source>Confirm document type assignment</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">516</context>
<context context-type="linenumber">541</context>
</context-group>
</trans-unit>
<trans-unit id="332180123895325027" datatype="html">
<source>This operation will assign the document type &quot;<x id="PH" equiv-text="documentType.name"/>&quot; to <x id="PH_1" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">518</context>
<context context-type="linenumber">543</context>
</context-group>
</trans-unit>
<trans-unit id="2236642492594872779" datatype="html">
<source>This operation will remove the document type from <x id="PH" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">520</context>
<context context-type="linenumber">545</context>
</context-group>
</trans-unit>
<trans-unit id="6386555513013840736" datatype="html">
<source>Confirm storage path assignment</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">554</context>
<context context-type="linenumber">579</context>
</context-group>
</trans-unit>
<trans-unit id="8750527458618415924" datatype="html">
<source>This operation will assign the storage path &quot;<x id="PH" equiv-text="storagePath.name"/>&quot; to <x id="PH_1" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">556</context>
<context context-type="linenumber">581</context>
</context-group>
</trans-unit>
<trans-unit id="60728365335056946" datatype="html">
<source>This operation will remove the storage path from <x id="PH" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">558</context>
<context context-type="linenumber">583</context>
</context-group>
</trans-unit>
<trans-unit id="4187352575310415704" datatype="html">
<source>Confirm custom field assignment</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">587</context>
<context context-type="linenumber">612</context>
</context-group>
</trans-unit>
<trans-unit id="7966494636326273856" datatype="html">
<source>This operation will assign the custom field &quot;<x id="PH" equiv-text="customField.name"/>&quot; to <x id="PH_1" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">593</context>
<context context-type="linenumber">618</context>
</context-group>
</trans-unit>
<trans-unit id="5789455969634598553" datatype="html">
@@ -8378,14 +8378,14 @@
)"/> to <x id="PH_1" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">598,600</context>
<context context-type="linenumber">623,625</context>
</context-group>
</trans-unit>
<trans-unit id="5648572354333199245" datatype="html">
<source>This operation will remove the custom field &quot;<x id="PH" equiv-text="customField.name"/>&quot; from <x id="PH_1" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">606</context>
<context context-type="linenumber">631</context>
</context-group>
</trans-unit>
<trans-unit id="6666899594015948817" datatype="html">
@@ -8394,7 +8394,7 @@
)"/> from <x id="PH_1" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">611,613</context>
<context context-type="linenumber">636,638</context>
</context-group>
</trans-unit>
<trans-unit id="8050047262594964176" datatype="html">
@@ -8405,91 +8405,91 @@
)"/> on <x id="PH_2" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">615,619</context>
<context context-type="linenumber">640,644</context>
</context-group>
</trans-unit>
<trans-unit id="8615059324209654051" datatype="html">
<source>Move <x id="PH" equiv-text="this.list.selected.size"/> selected document(s) to the trash?</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">757</context>
<context context-type="linenumber">782</context>
</context-group>
</trans-unit>
<trans-unit id="8585195717323764335" datatype="html">
<source>This operation will permanently recreate the archive files for <x id="PH" equiv-text="this.list.selected.size"/> selected document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">799</context>
<context context-type="linenumber">830</context>
</context-group>
</trans-unit>
<trans-unit id="7366623494074776040" datatype="html">
<source>The archive files will be re-generated with the current settings.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">800</context>
<context context-type="linenumber">831</context>
</context-group>
</trans-unit>
<trans-unit id="6555329262222566158" datatype="html">
<source>Rotate confirm</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">832</context>
<context context-type="linenumber">868</context>
</context-group>
</trans-unit>
<trans-unit id="5203024009814367559" datatype="html">
<source>This operation will add rotated versions of the <x id="PH" equiv-text="this.list.selected.size"/> document(s).</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">833</context>
<context context-type="linenumber">869</context>
</context-group>
</trans-unit>
<trans-unit id="7910756456450124185" datatype="html">
<source>Merge confirm</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">852</context>
<context context-type="linenumber">892</context>
</context-group>
</trans-unit>
<trans-unit id="7643543647233874431" datatype="html">
<source>This operation will merge <x id="PH" equiv-text="this.list.selected.size"/> selected documents into a new document.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">853</context>
<context context-type="linenumber">893</context>
</context-group>
</trans-unit>
<trans-unit id="7869008840945899895" datatype="html">
<source>Merged document will be queued for consumption.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">872</context>
<context context-type="linenumber">916</context>
</context-group>
</trans-unit>
<trans-unit id="476913782630693351" datatype="html">
<source>Custom fields updated.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">896</context>
<context context-type="linenumber">940</context>
</context-group>
</trans-unit>
<trans-unit id="3873496751167944011" datatype="html">
<source>Error updating custom fields.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">905</context>
<context context-type="linenumber">949</context>
</context-group>
</trans-unit>
<trans-unit id="6144801143088984138" datatype="html">
<source>Share link bundle creation requested.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">945</context>
<context context-type="linenumber">989</context>
</context-group>
</trans-unit>
<trans-unit id="46019676931295023" datatype="html">
<source>Share link bundle creation is not available yet.</source>
<context-group purpose="location">
<context context-type="sourcefile">src/app/components/document-list/bulk-editor/bulk-editor.component.ts</context>
<context context-type="linenumber">952</context>
<context context-type="linenumber">996</context>
</context-group>
</trans-unit>
<trans-unit id="6307402210351946694" datatype="html">


@@ -31,8 +31,8 @@ export enum EditDialogMode {
@Directive()
export abstract class EditDialogComponent<
T extends ObjectWithPermissions | ObjectWithId,
>
extends LoadingComponentWithPermissions
implements OnInit
{


@@ -950,8 +950,8 @@ describe('DocumentDetailComponent', () => {
it('should support reprocess, confirm and close modal after started', () => {
initNormally()
const bulkEditSpy = jest.spyOn(documentService, 'bulkEdit')
bulkEditSpy.mockReturnValue(of(true))
const reprocessSpy = jest.spyOn(documentService, 'reprocessDocuments')
reprocessSpy.mockReturnValue(of(true))
let openModal: NgbModalRef
modalService.activeInstances.subscribe((modal) => (openModal = modal[0]))
const modalSpy = jest.spyOn(modalService, 'open')
@@ -959,7 +959,7 @@ describe('DocumentDetailComponent', () => {
component.reprocess()
const modalCloseSpy = jest.spyOn(openModal, 'close')
openModal.componentInstance.confirmClicked.next()
expect(bulkEditSpy).toHaveBeenCalledWith([doc.id], 'reprocess', {})
expect(reprocessSpy).toHaveBeenCalledWith([doc.id])
expect(modalSpy).toHaveBeenCalled()
expect(toastSpy).toHaveBeenCalled()
expect(modalCloseSpy).toHaveBeenCalled()
@@ -967,13 +967,13 @@ describe('DocumentDetailComponent', () => {
it('should show error if redo ocr call fails', () => {
initNormally()
const bulkEditSpy = jest.spyOn(documentService, 'bulkEdit')
const reprocessSpy = jest.spyOn(documentService, 'reprocessDocuments')
let openModal: NgbModalRef
modalService.activeInstances.subscribe((modal) => (openModal = modal[0]))
const toastSpy = jest.spyOn(toastService, 'showError')
component.reprocess()
const modalCloseSpy = jest.spyOn(openModal, 'close')
bulkEditSpy.mockReturnValue(throwError(() => new Error('error occurred')))
reprocessSpy.mockReturnValue(throwError(() => new Error('error occurred')))
openModal.componentInstance.confirmClicked.next()
expect(toastSpy).toHaveBeenCalled()
expect(modalCloseSpy).not.toHaveBeenCalled()
@@ -1644,9 +1644,9 @@ describe('DocumentDetailComponent', () => {
expect(
fixture.debugElement.query(By.css('.preview-sticky img'))
).not.toBeUndefined()
;(component.document.mime_type =
;((component.document.mime_type =
'application/vnd.openxmlformats-officedocument.wordprocessingml.document'),
fixture.detectChanges()
fixture.detectChanges())
expect(component.archiveContentRenderType).toEqual(
component.ContentRenderType.Other
)
@@ -1669,18 +1669,15 @@ describe('DocumentDetailComponent', () => {
modal.componentInstance.pages = [{ page: 1, rotate: 0, splitAfter: false }]
modal.componentInstance.confirm()
let req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/edit_pdf/`
)
expect(req.request.body).toEqual({
documents: [10],
method: 'edit_pdf',
parameters: {
operations: [{ page: 1, rotate: 0, doc: 0 }],
delete_original: false,
update_document: false,
include_metadata: true,
source_mode: 'explicit_selection',
},
operations: [{ page: 1, rotate: 0, doc: 0 }],
delete_original: false,
update_document: false,
include_metadata: true,
source_mode: 'explicit_selection',
})
req.error(new ErrorEvent('failed'))
expect(errorSpy).toHaveBeenCalled()
@@ -1691,7 +1688,7 @@ describe('DocumentDetailComponent', () => {
modal.componentInstance.deleteOriginal = true
modal.componentInstance.confirm()
req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/edit_pdf/`
)
req.flush(true)
expect(closeSpy).toHaveBeenCalled()
@@ -1711,18 +1708,15 @@ describe('DocumentDetailComponent', () => {
dialog.deleteOriginal = true
dialog.confirm()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/remove_password/`
)
expect(req.request.body).toEqual({
documents: [10],
method: 'remove_password',
parameters: {
password: 'secret',
update_document: false,
include_metadata: false,
delete_original: true,
source_mode: 'explicit_selection',
},
password: 'secret',
update_document: false,
include_metadata: false,
delete_original: true,
source_mode: 'explicit_selection',
})
req.flush(true)
})
@@ -1737,7 +1731,7 @@ describe('DocumentDetailComponent', () => {
expect(errorSpy).toHaveBeenCalled()
httpTestingController.expectNone(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/remove_password/`
)
})
@@ -1753,7 +1747,7 @@ describe('DocumentDetailComponent', () => {
modal.componentInstance as PasswordRemovalConfirmDialogComponent
dialog.confirm()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/remove_password/`
)
req.error(new ErrorEvent('failed'))
@@ -1774,7 +1768,7 @@ describe('DocumentDetailComponent', () => {
modal.componentInstance as PasswordRemovalConfirmDialogComponent
dialog.confirm()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/remove_password/`
)
req.flush(true)


@@ -1379,27 +1379,25 @@ export class DocumentDetailComponent
modal.componentInstance.btnCaption = $localize`Proceed`
modal.componentInstance.confirmClicked.subscribe(() => {
modal.componentInstance.buttonsEnabled = false
this.documentsService
.bulkEdit([this.document.id], 'reprocess', {})
.subscribe({
next: () => {
this.toastService.showInfo(
$localize`Reprocess operation for "${this.document.title}" will begin in the background.`
)
if (modal) {
modal.close()
}
},
error: (error) => {
if (modal) {
modal.componentInstance.buttonsEnabled = true
}
this.toastService.showError(
$localize`Error executing operation`,
error
)
},
})
this.documentsService.reprocessDocuments([this.document.id]).subscribe({
next: () => {
this.toastService.showInfo(
$localize`Reprocess operation for "${this.document.title}" will begin in the background.`
)
if (modal) {
modal.close()
}
},
error: (error) => {
if (modal) {
modal.componentInstance.buttonsEnabled = true
}
this.toastService.showError(
$localize`Error executing operation`,
error
)
},
})
})
}
@@ -1766,7 +1764,7 @@ export class DocumentDetailComponent
.subscribe(() => {
modal.componentInstance.buttonsEnabled = false
this.documentsService
.bulkEdit([sourceDocumentId], 'edit_pdf', {
.editPdfDocuments([sourceDocumentId], {
operations: modal.componentInstance.getOperations(),
delete_original: modal.componentInstance.deleteOriginal,
update_document:
@@ -1824,7 +1822,7 @@ export class DocumentDetailComponent
dialog.buttonsEnabled = false
this.networkActive = true
this.documentsService
.bulkEdit([sourceDocumentId], 'remove_password', {
.removePasswordDocuments([sourceDocumentId], {
password: this.password,
update_document: dialog.updateDocument,
include_metadata: dialog.includeMetadata,


@@ -1,3 +1,4 @@
import { DatePipe } from '@angular/common'
import { provideHttpClient, withInterceptorsFromDi } from '@angular/common/http'
import {
HttpTestingController,
@@ -138,6 +139,7 @@ describe('BulkEditorComponent', () => {
},
},
FilterPipe,
DatePipe,
SettingsService,
{
provide: UserService,
@@ -849,13 +851,11 @@ describe('BulkEditorComponent', () => {
expect(modal).not.toBeUndefined()
modal.componentInstance.confirm()
let req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/delete/`
)
req.flush(true)
expect(req.request.body).toEqual({
documents: [3, 4],
method: 'delete',
parameters: {},
})
httpTestingController.match(
`${environment.apiBaseUrl}documents/?page=1&page_size=50&ordering=-created&truncate_content=true`
@@ -868,7 +868,7 @@ describe('BulkEditorComponent', () => {
fixture.detectChanges()
component.applyDelete()
req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/delete/`
)
})
@@ -944,13 +944,11 @@ describe('BulkEditorComponent', () => {
expect(modal).not.toBeUndefined()
modal.componentInstance.confirm()
let req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/reprocess/`
)
req.flush(true)
expect(req.request.body).toEqual({
documents: [3, 4],
method: 'reprocess',
parameters: {},
})
httpTestingController.match(
`${environment.apiBaseUrl}documents/?page=1&page_size=50&ordering=-created&truncate_content=true`
@@ -979,13 +977,13 @@ describe('BulkEditorComponent', () => {
modal.componentInstance.rotate()
modal.componentInstance.confirm()
let req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/rotate/`
)
req.flush(true)
expect(req.request.body).toEqual({
documents: [3, 4],
method: 'rotate',
parameters: { degrees: 90 },
degrees: 90,
source_mode: 'latest_version',
})
httpTestingController.match(
`${environment.apiBaseUrl}documents/?page=1&page_size=50&ordering=-created&truncate_content=true`
@@ -1021,13 +1019,12 @@ describe('BulkEditorComponent', () => {
modal.componentInstance.metadataDocumentID = 3
modal.componentInstance.confirm()
let req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/merge/`
)
req.flush(true)
expect(req.request.body).toEqual({
documents: [3, 4],
method: 'merge',
parameters: { metadata_document_id: 3 },
metadata_document_id: 3,
})
httpTestingController.match(
`${environment.apiBaseUrl}documents/?page=1&page_size=50&ordering=-created&truncate_content=true`
@@ -1040,13 +1037,13 @@ describe('BulkEditorComponent', () => {
modal.componentInstance.deleteOriginals = true
modal.componentInstance.confirm()
req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/merge/`
)
req.flush(true)
expect(req.request.body).toEqual({
documents: [3, 4],
method: 'merge',
parameters: { metadata_document_id: 3, delete_originals: true },
metadata_document_id: 3,
delete_originals: true,
})
httpTestingController.match(
`${environment.apiBaseUrl}documents/?page=1&page_size=50&ordering=-created&truncate_content=true`
@@ -1061,13 +1058,13 @@ describe('BulkEditorComponent', () => {
modal.componentInstance.archiveFallback = true
modal.componentInstance.confirm()
req = httpTestingController.expectOne(
`${environment.apiBaseUrl}documents/bulk_edit/`
`${environment.apiBaseUrl}documents/merge/`
)
req.flush(true)
expect(req.request.body).toEqual({
documents: [3, 4],
method: 'merge',
parameters: { metadata_document_id: 3, archive_fallback: true },
metadata_document_id: 3,
archive_fallback: true,
})
httpTestingController.match(
`${environment.apiBaseUrl}documents/?page=1&page_size=50&ordering=-created&truncate_content=true`


@@ -12,7 +12,7 @@ import {
} from '@ng-bootstrap/ng-bootstrap'
import { saveAs } from 'file-saver'
import { NgxBootstrapIconsModule } from 'ngx-bootstrap-icons'
import { first, map, Subject, switchMap, takeUntil } from 'rxjs'
import { first, map, Observable, Subject, switchMap, takeUntil } from 'rxjs'
import { ConfirmDialogComponent } from 'src/app/components/common/confirm-dialog/confirm-dialog.component'
import { CustomField } from 'src/app/data/custom-field'
import { MatchingModel } from 'src/app/data/matching-model'
@@ -29,7 +29,9 @@ import { CorrespondentService } from 'src/app/services/rest/correspondent.servic
import { CustomFieldsService } from 'src/app/services/rest/custom-fields.service'
import { DocumentTypeService } from 'src/app/services/rest/document-type.service'
import {
DocumentBulkEditMethod,
DocumentService,
MergeDocumentsRequest,
SelectionDataItem,
} from 'src/app/services/rest/document.service'
import { SavedViewService } from 'src/app/services/rest/saved-view.service'
@@ -255,9 +257,9 @@ export class BulkEditorComponent
this.unsubscribeNotifier.complete()
}
private executeBulkOperation(
private executeBulkEditMethod(
modal: NgbModalRef,
method: string,
method: DocumentBulkEditMethod,
args: any,
overrideDocumentIDs?: number[]
) {
@@ -272,32 +274,55 @@ export class BulkEditorComponent
)
.pipe(first())
.subscribe({
next: () => {
if (args['delete_originals']) {
this.list.selected.clear()
}
this.list.reload()
this.list.reduceSelectionToFilter()
this.list.selected.forEach((id) => {
this.openDocumentService.refreshDocument(id)
})
this.savedViewService.maybeRefreshDocumentCounts()
if (modal) {
modal.close()
}
},
error: (error) => {
if (modal) {
modal.componentInstance.buttonsEnabled = true
}
this.toastService.showError(
$localize`Error executing bulk operation`,
error
)
},
next: () => this.handleOperationSuccess(modal),
error: (error) => this.handleOperationError(modal, error),
})
}
private executeDocumentAction(
modal: NgbModalRef,
request: Observable<any>,
options: { deleteOriginals?: boolean } = {}
) {
if (modal) {
modal.componentInstance.buttonsEnabled = false
}
request.pipe(first()).subscribe({
next: () => {
this.handleOperationSuccess(modal, options.deleteOriginals ?? false)
},
error: (error) => this.handleOperationError(modal, error),
})
}
private handleOperationSuccess(
modal: NgbModalRef,
clearSelection: boolean = false
) {
if (clearSelection) {
this.list.selected.clear()
}
this.list.reload()
this.list.reduceSelectionToFilter()
this.list.selected.forEach((id) => {
this.openDocumentService.refreshDocument(id)
})
this.savedViewService.maybeRefreshDocumentCounts()
if (modal) {
modal.close()
}
}
private handleOperationError(modal: NgbModalRef, error: any) {
if (modal) {
modal.componentInstance.buttonsEnabled = true
}
this.toastService.showError(
$localize`Error executing bulk operation`,
error
)
}
private applySelectionData(
items: SelectionDataItem[],
selectionModel: FilterableDropdownSelectionModel
@@ -446,13 +471,13 @@ export class BulkEditorComponent
modal.componentInstance.confirmClicked
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
this.executeBulkOperation(modal, 'modify_tags', {
this.executeBulkEditMethod(modal, 'modify_tags', {
add_tags: changedTags.itemsToAdd.map((t) => t.id),
remove_tags: changedTags.itemsToRemove.map((t) => t.id),
})
})
} else {
this.executeBulkOperation(null, 'modify_tags', {
this.executeBulkEditMethod(null, 'modify_tags', {
add_tags: changedTags.itemsToAdd.map((t) => t.id),
remove_tags: changedTags.itemsToRemove.map((t) => t.id),
})
@@ -486,12 +511,12 @@ export class BulkEditorComponent
modal.componentInstance.confirmClicked
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
this.executeBulkOperation(modal, 'set_correspondent', {
this.executeBulkEditMethod(modal, 'set_correspondent', {
correspondent: correspondent ? correspondent.id : null,
})
})
} else {
this.executeBulkOperation(null, 'set_correspondent', {
this.executeBulkEditMethod(null, 'set_correspondent', {
correspondent: correspondent ? correspondent.id : null,
})
}
@@ -524,12 +549,12 @@ export class BulkEditorComponent
modal.componentInstance.confirmClicked
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
this.executeBulkOperation(modal, 'set_document_type', {
this.executeBulkEditMethod(modal, 'set_document_type', {
document_type: documentType ? documentType.id : null,
})
})
} else {
this.executeBulkOperation(null, 'set_document_type', {
this.executeBulkEditMethod(null, 'set_document_type', {
document_type: documentType ? documentType.id : null,
})
}
@@ -562,12 +587,12 @@ export class BulkEditorComponent
modal.componentInstance.confirmClicked
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
this.executeBulkOperation(modal, 'set_storage_path', {
this.executeBulkEditMethod(modal, 'set_storage_path', {
storage_path: storagePath ? storagePath.id : null,
})
})
} else {
this.executeBulkOperation(null, 'set_storage_path', {
this.executeBulkEditMethod(null, 'set_storage_path', {
storage_path: storagePath ? storagePath.id : null,
})
}
@@ -624,7 +649,7 @@ export class BulkEditorComponent
modal.componentInstance.confirmClicked
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
this.executeBulkOperation(modal, 'modify_custom_fields', {
this.executeBulkEditMethod(modal, 'modify_custom_fields', {
add_custom_fields: changedCustomFields.itemsToAdd.map((f) => f.id),
remove_custom_fields: changedCustomFields.itemsToRemove.map(
(f) => f.id
@@ -632,7 +657,7 @@ export class BulkEditorComponent
})
})
} else {
this.executeBulkOperation(null, 'modify_custom_fields', {
this.executeBulkEditMethod(null, 'modify_custom_fields', {
add_custom_fields: changedCustomFields.itemsToAdd.map((f) => f.id),
remove_custom_fields: changedCustomFields.itemsToRemove.map(
(f) => f.id
@@ -762,10 +787,16 @@ export class BulkEditorComponent
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
modal.componentInstance.buttonsEnabled = false
this.executeBulkOperation(modal, 'delete', {})
this.executeDocumentAction(
modal,
this.documentService.deleteDocuments(Array.from(this.list.selected))
)
})
} else {
this.executeBulkOperation(null, 'delete', {})
this.executeDocumentAction(
null,
this.documentService.deleteDocuments(Array.from(this.list.selected))
)
}
}
@@ -804,7 +835,12 @@ export class BulkEditorComponent
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
modal.componentInstance.buttonsEnabled = false
this.executeBulkOperation(modal, 'reprocess', {})
this.executeDocumentAction(
modal,
this.documentService.reprocessDocuments(
Array.from(this.list.selected)
)
)
})
}
@@ -815,7 +851,7 @@ export class BulkEditorComponent
modal.componentInstance.confirmClicked.subscribe(
({ permissions, merge }) => {
modal.componentInstance.buttonsEnabled = false
this.executeBulkOperation(modal, 'set_permissions', {
this.executeBulkEditMethod(modal, 'set_permissions', {
...permissions,
merge,
})
@@ -838,9 +874,13 @@ export class BulkEditorComponent
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
rotateDialog.buttonsEnabled = false
this.executeBulkOperation(modal, 'rotate', {
degrees: rotateDialog.degrees,
})
this.executeDocumentAction(
modal,
this.documentService.rotateDocuments(
Array.from(this.list.selected),
rotateDialog.degrees
)
)
})
}
@@ -856,18 +896,22 @@ export class BulkEditorComponent
mergeDialog.confirmClicked
.pipe(takeUntil(this.unsubscribeNotifier))
.subscribe(() => {
const args = {}
const args: MergeDocumentsRequest = {}
if (mergeDialog.metadataDocumentID > -1) {
args['metadata_document_id'] = mergeDialog.metadataDocumentID
args.metadata_document_id = mergeDialog.metadataDocumentID
}
if (mergeDialog.deleteOriginals) {
args['delete_originals'] = true
args.delete_originals = true
}
if (mergeDialog.archiveFallback) {
args['archive_fallback'] = true
args.archive_fallback = true
}
mergeDialog.buttonsEnabled = false
this.executeBulkOperation(modal, 'merge', args, mergeDialog.documentIDs)
this.executeDocumentAction(
modal,
this.documentService.mergeDocuments(mergeDialog.documentIDs, args),
{ deleteOriginals: !!args.delete_originals }
)
this.toastService.showInfo(
$localize`Merged document will be queued for consumption.`
)


@@ -230,6 +230,88 @@ describe(`DocumentService`, () => {
})
})
it('should call appropriate api endpoint for delete documents', () => {
const ids = [1, 2, 3]
subscription = service.deleteDocuments(ids).subscribe()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}${endpoint}/delete/`
)
expect(req.request.method).toEqual('POST')
expect(req.request.body).toEqual({
documents: ids,
})
})
it('should call appropriate api endpoint for reprocess documents', () => {
const ids = [1, 2, 3]
subscription = service.reprocessDocuments(ids).subscribe()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}${endpoint}/reprocess/`
)
expect(req.request.method).toEqual('POST')
expect(req.request.body).toEqual({
documents: ids,
})
})
it('should call appropriate api endpoint for rotate documents', () => {
const ids = [1, 2, 3]
subscription = service.rotateDocuments(ids, 90).subscribe()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}${endpoint}/rotate/`
)
expect(req.request.method).toEqual('POST')
expect(req.request.body).toEqual({
documents: ids,
degrees: 90,
source_mode: 'latest_version',
})
})
it('should call appropriate api endpoint for merge documents', () => {
const ids = [1, 2, 3]
const args = { metadata_document_id: 1, delete_originals: true }
subscription = service.mergeDocuments(ids, args).subscribe()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}${endpoint}/merge/`
)
expect(req.request.method).toEqual('POST')
expect(req.request.body).toEqual({
documents: ids,
metadata_document_id: 1,
delete_originals: true,
})
})
it('should call appropriate api endpoint for edit pdf', () => {
const ids = [1]
const args = { operations: [{ page: 1, rotate: 90, doc: 0 }] }
subscription = service.editPdfDocuments(ids, args).subscribe()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}${endpoint}/edit_pdf/`
)
expect(req.request.method).toEqual('POST')
expect(req.request.body).toEqual({
documents: ids,
operations: [{ page: 1, rotate: 90, doc: 0 }],
})
})
it('should call appropriate api endpoint for remove password', () => {
const ids = [1]
const args = { password: 'secret', update_document: true }
subscription = service.removePasswordDocuments(ids, args).subscribe()
const req = httpTestingController.expectOne(
`${environment.apiBaseUrl}${endpoint}/remove_password/`
)
expect(req.request.method).toEqual('POST')
expect(req.request.body).toEqual({
documents: ids,
password: 'secret',
update_document: true,
})
})
it('should return the correct preview URL for a single document', () => {
let url = service.getPreviewUrl(documents[0].id)
expect(url).toEqual(


@@ -42,6 +42,45 @@ export enum BulkEditSourceMode {
EXPLICIT_SELECTION = 'explicit_selection',
}
export type DocumentBulkEditMethod =
| 'set_correspondent'
| 'set_document_type'
| 'set_storage_path'
| 'add_tag'
| 'remove_tag'
| 'modify_tags'
| 'modify_custom_fields'
| 'set_permissions'
export interface MergeDocumentsRequest {
metadata_document_id?: number
delete_originals?: boolean
archive_fallback?: boolean
source_mode?: BulkEditSourceMode
}
export interface EditPdfOperation {
page: number
rotate?: number
doc?: number
}
export interface EditPdfDocumentsRequest {
operations: EditPdfOperation[]
delete_original?: boolean
update_document?: boolean
include_metadata?: boolean
source_mode?: BulkEditSourceMode
}
export interface RemovePasswordDocumentsRequest {
password: string
update_document?: boolean
delete_original?: boolean
include_metadata?: boolean
source_mode?: BulkEditSourceMode
}
@Injectable({
providedIn: 'root',
})
@@ -299,7 +338,7 @@ export class DocumentService extends AbstractPaperlessService<Document> {
return this.http.get<DocumentMetadata>(url.toString())
}
bulkEdit(ids: number[], method: string, args: any) {
bulkEdit(ids: number[], method: DocumentBulkEditMethod, args: any) {
return this.http.post(this.getResourceUrl(null, 'bulk_edit'), {
documents: ids,
method: method,
@@ -307,6 +346,54 @@ export class DocumentService extends AbstractPaperlessService<Document> {
})
}
deleteDocuments(ids: number[]) {
return this.http.post(this.getResourceUrl(null, 'delete'), {
documents: ids,
})
}
reprocessDocuments(ids: number[]) {
return this.http.post(this.getResourceUrl(null, 'reprocess'), {
documents: ids,
})
}
rotateDocuments(
ids: number[],
degrees: number,
sourceMode: BulkEditSourceMode = BulkEditSourceMode.LATEST_VERSION
) {
return this.http.post(this.getResourceUrl(null, 'rotate'), {
documents: ids,
degrees,
source_mode: sourceMode,
})
}
mergeDocuments(ids: number[], request: MergeDocumentsRequest = {}) {
return this.http.post(this.getResourceUrl(null, 'merge'), {
documents: ids,
...request,
})
}
editPdfDocuments(ids: number[], request: EditPdfDocumentsRequest) {
return this.http.post(this.getResourceUrl(null, 'edit_pdf'), {
documents: ids,
...request,
})
}
removePasswordDocuments(
ids: number[],
request: RemovePasswordDocumentsRequest
) {
return this.http.post(this.getResourceUrl(null, 'remove_password'), {
documents: ids,
...request,
})
}
getSelectionData(ids: number[]): Observable<SelectionData> {
return this.http.post<SelectionData>(
this.getResourceUrl(null, 'selection_data'),

View File

@@ -62,7 +62,7 @@ export function hslToRgb(h, s, l) {
* @return Array The HSL representation
*/
export function rgbToHsl(r, g, b) {
;(r /= 255), (g /= 255), (b /= 255)
;((r /= 255), (g /= 255), (b /= 255))
var max = Math.max(r, g, b),
min = Math.min(r, g, b)
var h,

View File

@@ -5,7 +5,7 @@
export const environment = {
production: false,
apiBaseUrl: 'http://localhost:8000/api/',
apiVersion: '9',
apiVersion: '10',
appTitle: 'Paperless-ngx',
tag: 'dev',
version: 'DEVELOPMENT',

View File

@@ -51,29 +51,11 @@ from documents.templating.workflows import parse_w_workflow_placeholders
from documents.utils import copy_basic_file_stats
from documents.utils import copy_file_with_basic_stats
from documents.utils import run_subprocess
from paperless.parsers.text import TextDocumentParser
from paperless.parsers.tika import TikaDocumentParser
from paperless_mail.parsers import MailDocumentParser
LOGGING_NAME: Final[str] = "paperless.consumer"
def _parser_cleanup(parser: DocumentParser) -> None:
"""
Call cleanup on a parser, handling the new-style context-manager parsers.
New-style parsers (e.g. TextDocumentParser) use __exit__ for teardown
instead of a cleanup() method. This shim will be removed once all existing parsers
have switched to the new style and this consumer is updated to use it.
TODO(stumpylog): Remove me in the future
"""
if isinstance(parser, (TextDocumentParser, TikaDocumentParser)):
parser.__exit__(None, None, None)
else:
parser.cleanup()
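A minimal sketch of the new-style parser shape the shim above handles, using a hypothetical ExampleParser that is not part of this change: teardown lives in __exit__ rather than in a cleanup() method.

class ExampleParser:
    """Hypothetical new-style parser: a plain context manager."""

    def __enter__(self):
        # acquire resources here (temp dirs, subprocesses, ...)
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # teardown that old-style parsers performed in cleanup()
        return None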
class WorkflowTriggerPlugin(
NoCleanupPluginMixin,
NoSetupPluginMixin,
@@ -449,12 +431,6 @@ class ConsumerPlugin(
progress_callback=progress_callback,
)
# New-style parsers use __enter__/__exit__ for resource management.
# _parser_cleanup (below) handles __exit__; call __enter__ here.
# TODO(stumpylog): Remove me in the future
if isinstance(document_parser, (TextDocumentParser, TikaDocumentParser)):
document_parser.__enter__()
self.log.debug(f"Parser: {type(document_parser).__name__}")
# Parse the document. This may take some time.
@@ -483,9 +459,6 @@ class ConsumerPlugin(
self.filename,
self.input_doc.mailrule_id,
)
elif isinstance(document_parser, (TextDocumentParser, TikaDocumentParser)):
# TODO(stumpylog): Remove me in the future
document_parser.parse(self.working_copy, mime_type)
else:
document_parser.parse(self.working_copy, mime_type, self.filename)
@@ -496,15 +469,11 @@ class ConsumerPlugin(
ProgressStatusOptions.WORKING,
ConsumerStatusShortMessage.GENERATING_THUMBNAIL,
)
if isinstance(document_parser, (TextDocumentParser, TikaDocumentParser)):
# TODO(stumpylog): Remove me in the future
thumbnail = document_parser.get_thumbnail(self.working_copy, mime_type)
else:
thumbnail = document_parser.get_thumbnail(
self.working_copy,
mime_type,
self.filename,
)
thumbnail = document_parser.get_thumbnail(
self.working_copy,
mime_type,
self.filename,
)
text = document_parser.get_text()
date = document_parser.get_date()
@@ -521,7 +490,7 @@ class ConsumerPlugin(
page_count = document_parser.get_page_count(self.working_copy, mime_type)
except ParseError as e:
_parser_cleanup(document_parser)
document_parser.cleanup()
if tempdir:
tempdir.cleanup()
self._fail(
@@ -531,7 +500,7 @@ class ConsumerPlugin(
exception=e,
)
except Exception as e:
_parser_cleanup(document_parser)
document_parser.cleanup()
if tempdir:
tempdir.cleanup()
self._fail(
@@ -733,7 +702,7 @@ class ConsumerPlugin(
exception=e,
)
finally:
_parser_cleanup(document_parser)
document_parser.cleanup()
tempdir.cleanup()
self.run_post_consume_script(document)

View File

@@ -30,7 +30,6 @@ def _process_document(doc_id: int) -> None:
)
shutil.move(thumb, document.thumbnail_path)
finally:
# TODO(stumpylog): Cleanup once all parsers are handled
parser.cleanup()

View File

@@ -169,7 +169,7 @@ def match_storage_paths(document: Document, classifier: DocumentClassifier, user
def matches(matching_model: MatchingModel, document: Document):
search_flags = 0
document_content = document.content
document_content = document.get_effective_content() or ""
# Check that match is not empty
if not matching_model.match.strip():

View File

@@ -5,7 +5,7 @@ from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
("documents", "0003_workflowaction_order"),
("documents", "0002_squashed"),
]
operations = [

View File

@@ -1,18 +0,0 @@
# Generated by Django 5.2.9 on 2026-01-20 20:06
from django.db import migrations
from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0002_squashed"),
]
operations = [
migrations.AddField(
model_name="workflowaction",
name="order",
field=models.PositiveIntegerField(default=0, verbose_name="order"),
),
]

View File

@@ -6,7 +6,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0004_remove_document_storage_type"),
("documents", "0003_remove_document_storage_type"),
]
operations = [

View File

@@ -6,7 +6,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0005_workflowtrigger_filter_has_any_correspondents_and_more"),
("documents", "0004_workflowtrigger_filter_has_any_correspondents_and_more"),
]
operations = [

View File

@@ -7,7 +7,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0006_alter_document_checksum_unique"),
("documents", "0005_alter_document_checksum_unique"),
]
operations = [

View File

@@ -46,7 +46,7 @@ def revoke_share_link_bundle_permissions(apps, schema_editor):
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
("documents", "0007_document_content_length"),
("documents", "0006_document_content_length"),
]
operations = [

View File

@@ -6,7 +6,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0008_sharelinkbundle"),
("documents", "0007_sharelinkbundle"),
]
operations = [

View File

@@ -7,7 +7,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0009_workflowaction_passwords_alter_workflowaction_type"),
("documents", "0008_workflowaction_passwords_alter_workflowaction_type"),
]
operations = [

View File

@@ -7,7 +7,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0010_alter_document_content_length"),
("documents", "0009_alter_document_content_length"),
]
operations = [

View File

@@ -6,7 +6,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0011_optimize_integer_field_sizes"),
("documents", "0010_optimize_integer_field_sizes"),
]
operations = [

View File

@@ -7,7 +7,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0012_alter_workflowaction_type"),
("documents", "0011_alter_workflowaction_type"),
]
operations = [

View File

@@ -6,7 +6,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0013_document_root_document"),
("documents", "0012_document_root_document"),
]
operations = [

View File

@@ -124,7 +124,7 @@ def _restore_visibility_fields(apps, schema_editor):
class Migration(migrations.Migration):
dependencies = [
("documents", "0014_alter_paperlesstask_task_name"),
("documents", "0013_alter_paperlesstask_task_name"),
]
operations = [

View File

@@ -7,7 +7,7 @@ from django.db import models
class Migration(migrations.Migration):
dependencies = [
("documents", "0015_savedview_visibility_to_ui_settings"),
("documents", "0014_savedview_visibility_to_ui_settings"),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]

View File

@@ -361,6 +361,42 @@ class Document(SoftDeleteModel, ModelWithOwner): # type: ignore[django-manager-
res += f" {self.title}"
return res
def get_effective_content(self) -> str | None:
"""
Returns the effective content for the document.
For root documents, this is the latest version's content when available.
For version documents, this is always the document's own content.
If the queryset has already annotated ``effective_content``, that value is used.
"""
if hasattr(self, "effective_content"):
return getattr(self, "effective_content")
if self.root_document_id is not None or self.pk is None:
return self.content
prefetched_cache = getattr(self, "_prefetched_objects_cache", None)
prefetched_versions = (
prefetched_cache.get("versions")
if isinstance(prefetched_cache, dict)
else None
)
if prefetched_versions:
latest_prefetched = max(prefetched_versions, key=lambda doc: doc.id)
return latest_prefetched.content
latest_version_content = (
Document.objects.filter(root_document=self)
.order_by("-id")
.values_list("content", flat=True)
.first()
)
return (
latest_version_content
if latest_version_content is not None
else self.content
)
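One way the ``effective_content`` annotation mentioned in the docstring could be supplied by a queryset is sketched below; the actual annotation is not shown in this diff, so the expression is an assumption built from standard Django ORM pieces.

from django.db.models import F, OuterRef, Subquery
from django.db.models.functions import Coalesce

# Assumed annotation (illustrative only): the latest version's content,
# falling back to the row's own content, so get_effective_content() can
# return the annotated value without extra queries.
latest_version = (
    Document.objects.filter(root_document=OuterRef("pk"))
    .order_by("-id")
    .values("content")[:1]
)
annotated = Document.objects.annotate(
    effective_content=Coalesce(Subquery(latest_version), F("content")),
)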
@property
def suggestion_content(self):
"""
@@ -373,15 +409,21 @@ class Document(SoftDeleteModel, ModelWithOwner): # type: ignore[django-manager-
This improves processing speed for large documents while keeping
enough context for accurate suggestions.
"""
if not self.content or len(self.content) <= 1200000:
return self.content
effective_content = self.get_effective_content()
if not effective_content or len(effective_content) <= 1200000:
return effective_content
else:
# Use 80% from the start and 20% from the end
# to preserve both opening and closing context.
head_len = 800000
tail_len = 200000
return " ".join((self.content[:head_len], self.content[-tail_len:]))
return " ".join(
(
effective_content[:head_len],
effective_content[-tail_len:],
),
)
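A worked example of the split above, with assumed sizes: a document whose effective content is 1,500,000 characters exceeds the 1,200,000-character threshold, so the first 800,000 and last 200,000 characters are kept.

# Worked example (hypothetical content, not from this diff):
content = "x" * 1_500_000
head, tail = content[:800_000], content[-200_000:]
trimmed = " ".join((head, tail))
assert len(trimmed) == 1_000_001  # 800,000 + 1 joining space + 200,000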
@property
def source_path(self) -> Path:

View File

@@ -1540,11 +1540,124 @@ class DocumentListSerializer(serializers.Serializer):
return documents
class SourceModeValidationMixin:
def validate_source_mode(self, source_mode: str) -> str:
if source_mode not in bulk_edit.SourceModeChoices.__dict__.values():
raise serializers.ValidationError("Invalid source_mode")
return source_mode
class RotateDocumentsSerializer(DocumentListSerializer, SourceModeValidationMixin):
degrees = serializers.IntegerField(required=True)
source_mode = serializers.CharField(
required=False,
default=bulk_edit.SourceModeChoices.LATEST_VERSION,
)
class MergeDocumentsSerializer(DocumentListSerializer, SourceModeValidationMixin):
metadata_document_id = serializers.IntegerField(
required=False,
allow_null=True,
)
delete_originals = serializers.BooleanField(required=False, default=False)
archive_fallback = serializers.BooleanField(required=False, default=False)
source_mode = serializers.CharField(
required=False,
default=bulk_edit.SourceModeChoices.LATEST_VERSION,
)
class EditPdfDocumentsSerializer(DocumentListSerializer, SourceModeValidationMixin):
operations = serializers.ListField(required=True)
delete_original = serializers.BooleanField(required=False, default=False)
update_document = serializers.BooleanField(required=False, default=False)
include_metadata = serializers.BooleanField(required=False, default=True)
source_mode = serializers.CharField(
required=False,
default=bulk_edit.SourceModeChoices.LATEST_VERSION,
)
def validate(self, attrs):
documents = attrs["documents"]
if len(documents) > 1:
raise serializers.ValidationError(
"Edit PDF method only supports one document",
)
operations = attrs["operations"]
if not isinstance(operations, list):
raise serializers.ValidationError("operations must be a list")
for op in operations:
if not isinstance(op, dict):
raise serializers.ValidationError("invalid operation entry")
if "page" not in op or not isinstance(op["page"], int):
raise serializers.ValidationError("page must be an integer")
if "rotate" in op and not isinstance(op["rotate"], int):
raise serializers.ValidationError("rotate must be an integer")
if "doc" in op and not isinstance(op["doc"], int):
raise serializers.ValidationError("doc must be an integer")
if attrs["update_document"]:
max_idx = max(op.get("doc", 0) for op in operations)
if max_idx > 0:
raise serializers.ValidationError(
"update_document only allowed with a single output document",
)
doc = Document.objects.get(id=documents[0])
if doc.page_count:
for op in operations:
if op["page"] < 1 or op["page"] > doc.page_count:
raise serializers.ValidationError(
f"Page {op['page']} is out of bounds for document with {doc.page_count} pages.",
)
return attrs
class RemovePasswordDocumentsSerializer(
DocumentListSerializer,
SourceModeValidationMixin,
):
password = serializers.CharField(required=True)
update_document = serializers.BooleanField(required=False, default=False)
delete_original = serializers.BooleanField(required=False, default=False)
include_metadata = serializers.BooleanField(required=False, default=True)
source_mode = serializers.CharField(
required=False,
default=bulk_edit.SourceModeChoices.LATEST_VERSION,
)
class DeleteDocumentsSerializer(DocumentListSerializer):
pass
class ReprocessDocumentsSerializer(DocumentListSerializer):
pass
class BulkEditSerializer(
SerializerWithPerms,
DocumentListSerializer,
SetPermissionsMixin,
SourceModeValidationMixin,
):
# TODO: remove this and related backwards compatibility code when API v9 is dropped
# split, delete_pages can be removed entirely
MOVED_DOCUMENT_ACTION_ENDPOINTS = {
"delete": "/api/documents/delete/",
"reprocess": "/api/documents/reprocess/",
"rotate": "/api/documents/rotate/",
"merge": "/api/documents/merge/",
"edit_pdf": "/api/documents/edit_pdf/",
"remove_password": "/api/documents/remove_password/",
"split": "/api/documents/edit_pdf/",
"delete_pages": "/api/documents/edit_pdf/",
}
LEGACY_DOCUMENT_ACTION_METHODS = tuple(MOVED_DOCUMENT_ACTION_ENDPOINTS.keys())
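For illustration (document IDs assumed), the deprecated bulk_edit shape and its dedicated replacement carry the same information; the tests later in this diff exercise both forms.

# Deprecated shape, still accepted with a deprecation warning:
#   POST /api/documents/bulk_edit/
legacy_payload = {
    "documents": [1, 2],
    "method": "rotate",
    "parameters": {"degrees": 90},
}
# Replacement shape:
#   POST /api/documents/rotate/
dedicated_payload = {"documents": [1, 2], "degrees": 90}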
method = serializers.ChoiceField(
choices=[
"set_correspondent",
@@ -1554,15 +1667,8 @@ class BulkEditSerializer(
"remove_tag",
"modify_tags",
"modify_custom_fields",
"delete",
"reprocess",
"set_permissions",
"rotate",
"merge",
"split",
"delete_pages",
"edit_pdf",
"remove_password",
*LEGACY_DOCUMENT_ACTION_METHODS,
],
label="Method",
write_only=True,
@@ -1640,8 +1746,7 @@ class BulkEditSerializer(
return bulk_edit.edit_pdf
elif method == "remove_password":
return bulk_edit.remove_password
else: # pragma: no cover
# This will never happen as it is handled by the ChoiceField
else:
raise serializers.ValidationError("Unsupported method.")
def _validate_parameters_tags(self, parameters) -> None:
@@ -1751,9 +1856,7 @@ class BulkEditSerializer(
"source_mode",
bulk_edit.SourceModeChoices.LATEST_VERSION,
)
if source_mode not in bulk_edit.SourceModeChoices.__dict__.values():
raise serializers.ValidationError("Invalid source_mode")
parameters["source_mode"] = source_mode
parameters["source_mode"] = self.validate_source_mode(source_mode)
def _validate_parameters_split(self, parameters) -> None:
if "pages" not in parameters:

View File

@@ -399,7 +399,6 @@ def update_document_content_maybe_archive_file(document_id) -> None:
f"Error while parsing document {document} (ID: {document_id})",
)
finally:
# TODO(stumpylog): Cleanup once all parsers are handled
parser.cleanup()

View File

@@ -5,6 +5,7 @@ from unittest.mock import patch
from django.contrib.auth.models import User
from django.core.files.uploadedfile import SimpleUploadedFile
from django.test import override_settings
from rest_framework import status
from rest_framework.test import APITestCase
@@ -693,3 +694,17 @@ class TestApiAppConfig(DirectoriesMixin, APITestCase):
content_type="application/json",
)
mock_update.assert_called_once()
@override_settings(LLM_ALLOW_INTERNAL_ENDPOINTS=False)
def test_update_llm_endpoint_blocks_internal_endpoint_when_disallowed(self) -> None:
response = self.client.patch(
f"{self.ENDPOINT}1/",
json.dumps(
{
"llm_endpoint": "http://127.0.0.1:11434",
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn("non-public address", str(response.data).lower())

View File

@@ -422,6 +422,34 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertEqual(args[0], [self.doc1.id])
self.assertEqual(len(kwargs), 0)
@mock.patch("documents.views.bulk_edit.delete")
def test_delete_documents_endpoint(self, m) -> None:
self.setup_mock(m, "delete")
response = self.client.post(
"/api/documents/delete/",
json.dumps({"documents": [self.doc1.id]}),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
m.assert_called_once()
args, kwargs = m.call_args
self.assertEqual(args[0], [self.doc1.id])
self.assertEqual(len(kwargs), 0)
@mock.patch("documents.views.bulk_edit.reprocess")
def test_reprocess_documents_endpoint(self, m) -> None:
self.setup_mock(m, "reprocess")
response = self.client.post(
"/api/documents/reprocess/",
json.dumps({"documents": [self.doc1.id]}),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
m.assert_called_once()
args, kwargs = m.call_args
self.assertEqual(args[0], [self.doc1.id])
self.assertEqual(len(kwargs), 0)
@mock.patch("documents.serialisers.bulk_edit.set_storage_path")
def test_api_set_storage_path(self, m) -> None:
"""
@@ -877,7 +905,7 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertEqual(kwargs["merge"], True)
@mock.patch("documents.serialisers.bulk_edit.set_storage_path")
@mock.patch("documents.serialisers.bulk_edit.merge")
@mock.patch("documents.views.bulk_edit.merge")
def test_insufficient_global_perms(self, mock_merge, mock_set_storage) -> None:
"""
GIVEN:
@@ -912,12 +940,11 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
mock_set_storage.assert_not_called()
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/merge/",
json.dumps(
{
"documents": [self.doc1.id],
"method": "merge",
"parameters": {"metadata_document_id": self.doc1.id},
"metadata_document_id": self.doc1.id,
},
),
content_type="application/json",
@@ -927,15 +954,12 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
mock_merge.assert_not_called()
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/merge/",
json.dumps(
{
"documents": [self.doc1.id],
"method": "merge",
"parameters": {
"metadata_document_id": self.doc1.id,
"delete_originals": True,
},
"metadata_document_id": self.doc1.id,
"delete_originals": True,
},
),
content_type="application/json",
@@ -1052,85 +1076,57 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
m.assert_called_once()
@mock.patch("documents.serialisers.bulk_edit.rotate")
@mock.patch("documents.views.bulk_edit.rotate")
def test_rotate(self, m) -> None:
self.setup_mock(m, "rotate")
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/rotate/",
json.dumps(
{
"documents": [self.doc2.id, self.doc3.id],
"method": "rotate",
"parameters": {"degrees": 90},
"degrees": 90,
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
m.assert_called_once()
args, kwargs = m.call_args
self.assertCountEqual(args[0], [self.doc2.id, self.doc3.id])
self.assertEqual(kwargs["degrees"], 90)
@mock.patch("documents.serialisers.bulk_edit.rotate")
def test_rotate_invalid_params(self, m) -> None:
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id, self.doc3.id],
"method": "rotate",
"parameters": {"degrees": "foo"},
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id, self.doc3.id],
"method": "rotate",
"parameters": {"degrees": 90.5},
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
m.assert_not_called()
@mock.patch("documents.serialisers.bulk_edit.merge")
def test_merge(self, m) -> None:
self.setup_mock(m, "merge")
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id, self.doc3.id],
"method": "merge",
"parameters": {"metadata_document_id": self.doc3.id},
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
m.assert_called_once()
args, kwargs = m.call_args
self.assertCountEqual(args[0], [self.doc2.id, self.doc3.id])
self.assertEqual(kwargs["metadata_document_id"], self.doc3.id)
self.assertEqual(kwargs["source_mode"], "latest_version")
self.assertEqual(kwargs["user"], self.user)
@mock.patch("documents.serialisers.bulk_edit.merge")
def test_merge_and_delete_insufficient_permissions(self, m) -> None:
@mock.patch("documents.views.bulk_edit.rotate")
def test_rotate_invalid_params(self, m) -> None:
response = self.client.post(
"/api/documents/rotate/",
json.dumps(
{
"documents": [self.doc2.id, self.doc3.id],
"degrees": "foo",
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
response = self.client.post(
"/api/documents/rotate/",
json.dumps(
{
"documents": [self.doc2.id, self.doc3.id],
"degrees": 90.5,
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
m.assert_not_called()
@mock.patch("documents.views.bulk_edit.rotate")
def test_rotate_insufficient_permissions(self, m) -> None:
self.doc1.owner = User.objects.get(username="temp_admin")
self.doc1.save()
user1 = User.objects.create(username="user1")
@@ -1138,17 +1134,13 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
user1.save()
self.client.force_authenticate(user=user1)
self.setup_mock(m, "merge")
self.setup_mock(m, "rotate")
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/rotate/",
json.dumps(
{
"documents": [self.doc1.id, self.doc2.id],
"method": "merge",
"parameters": {
"metadata_document_id": self.doc2.id,
"delete_originals": True,
},
"degrees": 90,
},
),
content_type="application/json",
@@ -1159,15 +1151,11 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertEqual(response.content, b"Insufficient permissions")
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/rotate/",
json.dumps(
{
"documents": [self.doc2.id, self.doc3.id],
"method": "merge",
"parameters": {
"metadata_document_id": self.doc2.id,
"delete_originals": True,
},
"degrees": 90,
},
),
content_type="application/json",
@@ -1176,27 +1164,78 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertEqual(response.status_code, status.HTTP_200_OK)
m.assert_called_once()
@mock.patch("documents.serialisers.bulk_edit.merge")
def test_merge_invalid_parameters(self, m) -> None:
"""
GIVEN:
- API data for merging documents is provided
- The parameters are invalid
WHEN:
- API is called
THEN:
- The API fails with the correct error code
"""
@mock.patch("documents.views.bulk_edit.merge")
def test_merge(self, m) -> None:
self.setup_mock(m, "merge")
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/merge/",
json.dumps(
{
"documents": [self.doc2.id, self.doc3.id],
"metadata_document_id": self.doc3.id,
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
m.assert_called_once()
args, kwargs = m.call_args
self.assertCountEqual(args[0], [self.doc2.id, self.doc3.id])
self.assertEqual(kwargs["metadata_document_id"], self.doc3.id)
self.assertEqual(kwargs["source_mode"], "latest_version")
self.assertEqual(kwargs["user"], self.user)
@mock.patch("documents.views.bulk_edit.merge")
def test_merge_and_delete_insufficient_permissions(self, m) -> None:
self.doc1.owner = User.objects.get(username="temp_admin")
self.doc1.save()
user1 = User.objects.create(username="user1")
user1.user_permissions.add(*Permission.objects.all())
user1.save()
self.client.force_authenticate(user=user1)
self.setup_mock(m, "merge")
response = self.client.post(
"/api/documents/merge/",
json.dumps(
{
"documents": [self.doc1.id, self.doc2.id],
"method": "merge",
"parameters": {
"delete_originals": "not_boolean",
},
"metadata_document_id": self.doc2.id,
"delete_originals": True,
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
m.assert_not_called()
self.assertEqual(response.content, b"Insufficient permissions")
response = self.client.post(
"/api/documents/merge/",
json.dumps(
{
"documents": [self.doc2.id, self.doc3.id],
"metadata_document_id": self.doc2.id,
"delete_originals": True,
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
m.assert_called_once()
@mock.patch("documents.views.bulk_edit.merge")
def test_merge_invalid_parameters(self, m) -> None:
self.setup_mock(m, "merge")
response = self.client.post(
"/api/documents/merge/",
json.dumps(
{
"documents": [self.doc1.id, self.doc2.id],
"delete_originals": "not_boolean",
},
),
content_type="application/json",
@@ -1205,207 +1244,67 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
m.assert_not_called()
@mock.patch("documents.serialisers.bulk_edit.split")
def test_split(self, m) -> None:
self.setup_mock(m, "split")
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "split",
"parameters": {"pages": "1,2-4,5-6,7"},
},
),
content_type="application/json",
)
def test_bulk_edit_allows_legacy_file_methods_with_warning(self) -> None:
method_payloads = {
"delete": {},
"reprocess": {},
"rotate": {"degrees": 90},
"merge": {"metadata_document_id": self.doc2.id},
"edit_pdf": {"operations": [{"page": 1}]},
"remove_password": {"password": "secret"},
"split": {"pages": "1,2-4"},
"delete_pages": {"pages": [1, 2]},
}
self.assertEqual(response.status_code, status.HTTP_200_OK)
for version in (9, 10):
for method, parameters in method_payloads.items():
with self.subTest(method=method, version=version):
with mock.patch(
f"documents.views.bulk_edit.{method}",
) as mocked_method:
self.setup_mock(mocked_method, method)
with self.assertLogs("paperless.api", level="WARNING") as logs:
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": method,
"parameters": parameters,
},
),
content_type="application/json",
headers={
"Accept": f"application/json; version={version}",
},
)
m.assert_called_once()
args, kwargs = m.call_args
self.assertCountEqual(args[0], [self.doc2.id])
self.assertEqual(kwargs["pages"], [[1], [2, 3, 4], [5, 6], [7]])
self.assertEqual(kwargs["user"], self.user)
self.assertEqual(response.status_code, status.HTTP_200_OK)
mocked_method.assert_called_once()
self.assertTrue(
any(
"Deprecated bulk_edit method" in entry
and f"'{method}'" in entry
for entry in logs.output
),
)
def test_split_invalid_params(self) -> None:
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "split",
"parameters": {}, # pages not specified
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"pages not specified", response.content)
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "split",
"parameters": {"pages": "1:7"}, # wrong format
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"invalid pages specified", response.content)
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [
self.doc1.id,
self.doc2.id,
], # only one document supported
"method": "split",
"parameters": {"pages": "1-2,3-7"}, # wrong format
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"Split method only supports one document", response.content)
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "split",
"parameters": {
"pages": "1",
"delete_originals": "notabool",
}, # not a bool
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"delete_originals must be a boolean", response.content)
@mock.patch("documents.serialisers.bulk_edit.delete_pages")
def test_delete_pages(self, m) -> None:
self.setup_mock(m, "delete_pages")
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "delete_pages",
"parameters": {"pages": [1, 2, 3, 4]},
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
m.assert_called_once()
args, kwargs = m.call_args
self.assertCountEqual(args[0], [self.doc2.id])
self.assertEqual(kwargs["pages"], [1, 2, 3, 4])
def test_delete_pages_invalid_params(self) -> None:
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [
self.doc1.id,
self.doc2.id,
], # only one document supported
"method": "delete_pages",
"parameters": {
"pages": [1, 2, 3, 4],
},
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(
b"Delete pages method only supports one document",
response.content,
)
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "delete_pages",
"parameters": {}, # pages not specified
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"pages not specified", response.content)
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "delete_pages",
"parameters": {"pages": "1-3"}, # not a list
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"pages must be a list", response.content)
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "delete_pages",
"parameters": {"pages": ["1-3"]}, # not ints
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"pages must be a list of integers", response.content)
@mock.patch("documents.serialisers.bulk_edit.edit_pdf")
@mock.patch("documents.views.bulk_edit.edit_pdf")
def test_edit_pdf(self, m) -> None:
self.setup_mock(m, "edit_pdf")
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {
"operations": [{"page": 1}],
"source_mode": "explicit_selection",
},
"operations": [{"page": 1}],
"source_mode": "explicit_selection",
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
m.assert_called_once()
args, kwargs = m.call_args
self.assertCountEqual(args[0], [self.doc2.id])
@@ -1414,14 +1313,12 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertEqual(kwargs["user"], self.user)
def test_edit_pdf_invalid_params(self) -> None:
# multiple documents
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc2.id, self.doc3.id],
"method": "edit_pdf",
"parameters": {"operations": [{"page": 1}]},
"operations": [{"page": 1}],
},
),
content_type="application/json",
@@ -1429,44 +1326,25 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"Edit PDF method only supports one document", response.content)
# no operations specified
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {},
"operations": "not_a_list",
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"operations not specified", response.content)
self.assertIn(b"Expected a list of items", response.content)
# operations not a list
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {"operations": "not_a_list"},
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"operations must be a list", response.content)
# invalid operation
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {"operations": ["invalid_operation"]},
"operations": ["invalid_operation"],
},
),
content_type="application/json",
@@ -1474,14 +1352,12 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"invalid operation entry", response.content)
# page not an int
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {"operations": [{"page": "not_an_int"}]},
"operations": [{"page": "not_an_int"}],
},
),
content_type="application/json",
@@ -1489,14 +1365,12 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"page must be an integer", response.content)
# rotate not an int
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {"operations": [{"page": 1, "rotate": "not_an_int"}]},
"operations": [{"page": 1, "rotate": "not_an_int"}],
},
),
content_type="application/json",
@@ -1504,14 +1378,12 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"rotate must be an integer", response.content)
# doc not an int
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {"operations": [{"page": 1, "doc": "not_an_int"}]},
"operations": [{"page": 1, "doc": "not_an_int"}],
},
),
content_type="application/json",
@@ -1519,53 +1391,13 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"doc must be an integer", response.content)
# update_document not a boolean
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {
"update_document": "not_a_bool",
"operations": [{"page": 1}],
},
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"update_document must be a boolean", response.content)
# include_metadata not a boolean
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {
"include_metadata": "not_a_bool",
"operations": [{"page": 1}],
},
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"include_metadata must be a boolean", response.content)
# update_document True but output would be multiple documents
response = self.client.post(
"/api/documents/bulk_edit/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {
"update_document": True,
"operations": [{"page": 1, "doc": 1}, {"page": 2, "doc": 2}],
},
"update_document": True,
"operations": [{"page": 1, "doc": 1}, {"page": 2, "doc": 2}],
},
),
content_type="application/json",
@@ -1576,17 +1408,13 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
response.content,
)
# invalid source mode
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {
"operations": [{"page": 1}],
"source_mode": "not_a_mode",
},
"operations": [{"page": 1}],
"source_mode": "not_a_mode",
},
),
content_type="application/json",
@@ -1594,42 +1422,70 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"Invalid source_mode", response.content)
@mock.patch("documents.serialisers.bulk_edit.edit_pdf")
@mock.patch("documents.views.bulk_edit.edit_pdf")
def test_edit_pdf_page_out_of_bounds(self, m) -> None:
"""
GIVEN:
- API data for editing PDF is provided
- The page number is out of bounds
WHEN:
- API is called
THEN:
- The API fails with the correct error code
"""
self.setup_mock(m, "edit_pdf")
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "edit_pdf",
"parameters": {"operations": [{"page": 99}]},
"operations": [{"page": 99}],
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"out of bounds", response.content)
m.assert_not_called()
@mock.patch("documents.serialisers.bulk_edit.remove_password")
def test_remove_password(self, m) -> None:
self.setup_mock(m, "remove_password")
@mock.patch("documents.views.bulk_edit.edit_pdf")
def test_edit_pdf_insufficient_permissions(self, m) -> None:
self.doc1.owner = User.objects.get(username="temp_admin")
self.doc1.save()
user1 = User.objects.create(username="user1")
user1.user_permissions.add(*Permission.objects.all())
user1.save()
self.client.force_authenticate(user=user1)
self.setup_mock(m, "edit_pdf")
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc1.id],
"operations": [{"page": 1}],
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
m.assert_not_called()
self.assertEqual(response.content, b"Insufficient permissions")
response = self.client.post(
"/api/documents/edit_pdf/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "remove_password",
"parameters": {"password": "secret", "update_document": True},
"operations": [{"page": 1}],
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
m.assert_called_once()
@mock.patch("documents.views.bulk_edit.remove_password")
def test_remove_password(self, m) -> None:
self.setup_mock(m, "remove_password")
response = self.client.post(
"/api/documents/remove_password/",
json.dumps(
{
"documents": [self.doc2.id],
"password": "secret",
"update_document": True,
},
),
content_type="application/json",
@@ -1641,36 +1497,69 @@ class TestBulkEditAPI(DirectoriesMixin, APITestCase):
self.assertCountEqual(args[0], [self.doc2.id])
self.assertEqual(kwargs["password"], "secret")
self.assertTrue(kwargs["update_document"])
self.assertEqual(kwargs["source_mode"], "latest_version")
self.assertEqual(kwargs["user"], self.user)
def test_remove_password_invalid_params(self) -> None:
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/remove_password/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "remove_password",
"parameters": {},
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"password not specified", response.content)
response = self.client.post(
"/api/documents/bulk_edit/",
"/api/documents/remove_password/",
json.dumps(
{
"documents": [self.doc2.id],
"method": "remove_password",
"parameters": {"password": 123},
"password": 123,
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)
self.assertIn(b"password must be a string", response.content)
@mock.patch("documents.views.bulk_edit.remove_password")
def test_remove_password_insufficient_permissions(self, m) -> None:
self.doc1.owner = User.objects.get(username="temp_admin")
self.doc1.save()
user1 = User.objects.create(username="user1")
user1.user_permissions.add(*Permission.objects.all())
user1.save()
self.client.force_authenticate(user=user1)
self.setup_mock(m, "remove_password")
response = self.client.post(
"/api/documents/remove_password/",
json.dumps(
{
"documents": [self.doc1.id],
"password": "secret",
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_403_FORBIDDEN)
m.assert_not_called()
self.assertEqual(response.content, b"Insufficient permissions")
response = self.client.post(
"/api/documents/remove_password/",
json.dumps(
{
"documents": [self.doc2.id],
"password": "secret",
},
),
content_type="application/json",
)
self.assertEqual(response.status_code, status.HTTP_200_OK)
m.assert_called_once()
@override_settings(AUDIT_LOG_ENABLED=True)
def test_bulk_edit_audit_log_enabled_simple_field(self) -> None:

View File

@@ -25,3 +25,39 @@ class TestApiSchema(APITestCase):
ui_response = self.client.get(self.ENDPOINT + "view/")
self.assertEqual(ui_response.status_code, status.HTTP_200_OK)
def test_schema_includes_dedicated_document_edit_endpoints(self) -> None:
schema_response = self.client.get(self.ENDPOINT)
self.assertEqual(schema_response.status_code, status.HTTP_200_OK)
paths = schema_response.data["paths"]
self.assertIn("/api/documents/delete/", paths)
self.assertIn("/api/documents/reprocess/", paths)
self.assertIn("/api/documents/rotate/", paths)
self.assertIn("/api/documents/merge/", paths)
self.assertIn("/api/documents/edit_pdf/", paths)
self.assertIn("/api/documents/remove_password/", paths)
def test_schema_bulk_edit_advertises_legacy_document_action_methods(self) -> None:
schema_response = self.client.get(self.ENDPOINT)
self.assertEqual(schema_response.status_code, status.HTTP_200_OK)
schema = schema_response.data["components"]["schemas"]
bulk_schema = schema["BulkEditRequest"]
method_schema = bulk_schema["properties"]["method"]
# drf-spectacular emits the enum as a referenced schema for this field
enum_ref = method_schema["allOf"][0]["$ref"].split("/")[-1]
advertised_methods = schema[enum_ref]["enum"]
for action_method in [
"delete",
"reprocess",
"rotate",
"merge",
"edit_pdf",
"remove_password",
"split",
"delete_pages",
]:
self.assertIn(action_method, advertised_methods)

View File

@@ -156,6 +156,46 @@ class TestDocument(TestCase):
)
self.assertEqual(doc.get_public_filename(), "2020-12-25 test")
def test_suggestion_content_uses_latest_version_content_for_root_documents(
self,
) -> None:
root = Document.objects.create(
title="root",
checksum="root",
mime_type="application/pdf",
content="outdated root content",
)
version = Document.objects.create(
title="v1",
checksum="v1",
mime_type="application/pdf",
root_document=root,
content="latest version content",
)
self.assertEqual(root.suggestion_content, version.content)
def test_content_length_is_per_document_row_for_versions(self) -> None:
root = Document.objects.create(
title="root",
checksum="root",
mime_type="application/pdf",
content="abc",
)
version = Document.objects.create(
title="v1",
checksum="v1",
mime_type="application/pdf",
root_document=root,
content="abcdefgh",
)
root.refresh_from_db()
version.refresh_from_db()
self.assertEqual(root.content_length, 3)
self.assertEqual(version.content_length, 8)
def test_suggestion_content(self) -> None:
"""

View File

@@ -48,6 +48,52 @@ class _TestMatchingBase(TestCase):
class TestMatching(_TestMatchingBase):
def test_matches_uses_latest_version_content_for_root_documents(self) -> None:
root = Document.objects.create(
title="root",
checksum="root",
mime_type="application/pdf",
content="root content without token",
)
Document.objects.create(
title="v1",
checksum="v1",
mime_type="application/pdf",
root_document=root,
content="latest version contains keyword",
)
tag = Tag.objects.create(
name="tag",
match="keyword",
matching_algorithm=Tag.MATCH_ANY,
)
self.assertTrue(matching.matches(tag, root))
def test_matches_does_not_fall_back_to_root_content_when_version_exists(
self,
) -> None:
root = Document.objects.create(
title="root",
checksum="root",
mime_type="application/pdf",
content="root contains keyword",
)
Document.objects.create(
title="v1",
checksum="v1",
mime_type="application/pdf",
root_document=root,
content="latest version without token",
)
tag = Tag.objects.create(
name="tag",
match="keyword",
matching_algorithm=Tag.MATCH_ANY,
)
self.assertFalse(matching.matches(tag, root))
def test_match_none(self) -> None:
self._test_matching(
"",

View File

@@ -6,8 +6,8 @@ SIDEBAR_VIEWS_VISIBLE_IDS_KEY = "sidebar_views_visible_ids"
class TestMigrateSavedViewVisibilityToUiSettings(TestMigrations):
migrate_from = "0013_document_root_document"
migrate_to = "0015_savedview_visibility_to_ui_settings"
migrate_from = "0013_alter_paperlesstask_task_name"
migrate_to = "0014_savedview_visibility_to_ui_settings"
def setUpBeforeMigration(self, apps) -> None:
User = apps.get_model("auth", "User")
@@ -132,8 +132,8 @@ class TestMigrateSavedViewVisibilityToUiSettings(TestMigrations):
class TestReverseMigrateSavedViewVisibilityFromUiSettings(TestMigrations):
migrate_from = "0015_savedview_visibility_to_ui_settings"
migrate_to = "0014_alter_paperlesstask_task_name"
migrate_from = "0014_savedview_visibility_to_ui_settings"
migrate_to = "0013_alter_paperlesstask_task_name"
def setUpBeforeMigration(self, apps) -> None:
User = apps.get_model("auth", "User")

View File

@@ -2,8 +2,8 @@ from documents.tests.utils import TestMigrations
class TestMigrateShareLinkBundlePermissions(TestMigrations):
migrate_from = "0007_document_content_length"
migrate_to = "0008_sharelinkbundle"
migrate_from = "0006_document_content_length"
migrate_to = "0007_sharelinkbundle"
def setUpBeforeMigration(self, apps) -> None:
User = apps.get_model("auth", "User")
@@ -24,8 +24,8 @@ class TestMigrateShareLinkBundlePermissions(TestMigrations):
class TestReverseMigrateShareLinkBundlePermissions(TestMigrations):
migrate_from = "0008_sharelinkbundle"
migrate_to = "0007_document_content_length"
migrate_from = "0007_sharelinkbundle"
migrate_to = "0006_document_content_length"
def setUpBeforeMigration(self, apps) -> None:
User = apps.get_model("auth", "User")

View File

@@ -9,9 +9,9 @@ from documents.parsers import get_default_file_extension
from documents.parsers import get_parser_class_for_mime_type
from documents.parsers import get_supported_file_extensions
from documents.parsers import is_file_ext_supported
from paperless.parsers.text import TextDocumentParser
from paperless.parsers.tika import TikaDocumentParser
from paperless_tesseract.parsers import RasterisedDocumentParser
from paperless_text.parsers import TextDocumentParser
from paperless_tika.parsers import TikaDocumentParser
class TestParserDiscovery(TestCase):

View File

@@ -7,7 +7,6 @@ import tempfile
import zipfile
from collections import defaultdict
from collections import deque
from contextlib import nullcontext
from datetime import datetime
from pathlib import Path
from time import mktime
@@ -177,14 +176,20 @@ from documents.serialisers import BulkEditObjectsSerializer
from documents.serialisers import BulkEditSerializer
from documents.serialisers import CorrespondentSerializer
from documents.serialisers import CustomFieldSerializer
from documents.serialisers import DeleteDocumentsSerializer
from documents.serialisers import DocumentListSerializer
from documents.serialisers import DocumentSerializer
from documents.serialisers import DocumentTypeSerializer
from documents.serialisers import DocumentVersionLabelSerializer
from documents.serialisers import DocumentVersionSerializer
from documents.serialisers import EditPdfDocumentsSerializer
from documents.serialisers import EmailSerializer
from documents.serialisers import MergeDocumentsSerializer
from documents.serialisers import NotesSerializer
from documents.serialisers import PostDocumentSerializer
from documents.serialisers import RemovePasswordDocumentsSerializer
from documents.serialisers import ReprocessDocumentsSerializer
from documents.serialisers import RotateDocumentsSerializer
from documents.serialisers import RunTaskViewSerializer
from documents.serialisers import SavedViewSerializer
from documents.serialisers import SearchResultSerializer
@@ -220,7 +225,6 @@ from paperless.celery import app as celery_app
from paperless.config import AIConfig
from paperless.config import GeneralConfig
from paperless.models import ApplicationConfiguration
from paperless.parsers import ParserProtocol
from paperless.serialisers import GroupSerializer
from paperless.serialisers import UserSerializer
from paperless.views import StandardPagination
@@ -1080,11 +1084,9 @@ class DocumentViewSet(
parser_class = get_parser_class_for_mime_type(mime_type)
if parser_class:
parser = parser_class(progress_callback=None, logging_group=None)
cm = parser if isinstance(parser, ParserProtocol) else nullcontext(parser)
try:
with cm:
return parser.extract_metadata(file, mime_type)
return parser.extract_metadata(file, mime_type)
except Exception: # pragma: no cover
logger.exception(f"Issue getting metadata for {file}")
# TODO: cover GPG errors, remove later.
@@ -1526,13 +1528,17 @@ class DocumentViewSet(
return Response(sorted(entries, key=lambda x: x["timestamp"], reverse=True))
@extend_schema(
operation_id="documents_email_document",
deprecated=True,
)
@action(
methods=["post"],
detail=True,
url_path="email",
permission_classes=[IsAuthenticated, ViewDocumentsPermissions],
)
# TODO: deprecated as of 2.19, remove in future release
# TODO: deprecated, remove with drop of support for API v9
def email_document(self, request, pk=None):
request_data = request.data.copy()
request_data.setlist("documents", [pk])
@@ -2118,6 +2124,125 @@ class SavedViewViewSet(BulkPermissionMixin, PassUserMixin, ModelViewSet):
ordering_fields = ("name",)
class DocumentOperationPermissionMixin(PassUserMixin):
permission_classes = (IsAuthenticated,)
parser_classes = (parsers.JSONParser,)
METHOD_NAMES_REQUIRING_USER = {
"split",
"merge",
"rotate",
"delete_pages",
"edit_pdf",
"remove_password",
}
def _has_document_permissions(
self,
*,
user: User,
documents: list[int],
method,
parameters: dict[str, Any],
) -> bool:
if user.is_superuser:
return True
document_objs = Document.objects.select_related("owner").filter(
pk__in=documents,
)
user_is_owner_of_all_documents = all(
(doc.owner == user or doc.owner is None) for doc in document_objs
)
# check global and object permissions for all documents
has_perms = user.has_perm("documents.change_document") and all(
has_perms_owner_aware(user, "change_document", doc) for doc in document_objs
)
# check ownership for methods that change original document
if (
(
has_perms
and method
in [
bulk_edit.set_permissions,
bulk_edit.delete,
bulk_edit.rotate,
bulk_edit.delete_pages,
bulk_edit.edit_pdf,
bulk_edit.remove_password,
]
)
or (
method in [bulk_edit.merge, bulk_edit.split]
and parameters.get("delete_originals")
)
or (method == bulk_edit.edit_pdf and parameters.get("update_document"))
):
has_perms = user_is_owner_of_all_documents
# check global add permissions for methods that create documents
if (
has_perms
and (
method in [bulk_edit.split, bulk_edit.merge]
or (
method in [bulk_edit.edit_pdf, bulk_edit.remove_password]
and not parameters.get("update_document")
)
)
and not user.has_perm("documents.add_document")
):
has_perms = False
# check global delete permissions for methods that delete documents
if (
has_perms
and (
method == bulk_edit.delete
or (
method in [bulk_edit.merge, bulk_edit.split]
and parameters.get("delete_originals")
)
)
and not user.has_perm("documents.delete_document")
):
has_perms = False
return has_perms
def _execute_document_action(
self,
*,
method,
validated_data: dict[str, Any],
operation_label: str,
):
documents = validated_data["documents"]
parameters = {k: v for k, v in validated_data.items() if k != "documents"}
user = self.request.user
if method.__name__ in self.METHOD_NAMES_REQUIRING_USER:
parameters["user"] = user
if not self._has_document_permissions(
user=user,
documents=documents,
method=method,
parameters=parameters,
):
return HttpResponseForbidden("Insufficient permissions")
try:
result = method(documents, **parameters)
return Response({"result": result})
except Exception as e:
logger.warning(f"An error occurred performing {operation_label}: {e!s}")
return HttpResponseBadRequest(
f"Error performing {operation_label}, check logs for more detail.",
)
@extend_schema_view(
post=extend_schema(
operation_id="bulk_edit",
@@ -2136,7 +2261,7 @@ class SavedViewViewSet(BulkPermissionMixin, PassUserMixin, ModelViewSet):
},
),
)
class BulkEditView(PassUserMixin):
class BulkEditView(DocumentOperationPermissionMixin):
MODIFIED_FIELD_BY_METHOD = {
"set_correspondent": "correspondent",
"set_document_type": "document_type",
@@ -2158,11 +2283,24 @@ class BulkEditView(PassUserMixin):
"remove_password": None,
}
permission_classes = (IsAuthenticated,)
serializer_class = BulkEditSerializer
parser_classes = (parsers.JSONParser,)
def post(self, request, *args, **kwargs):
request_method = request.data.get("method")
api_version = int(request.version or settings.REST_FRAMEWORK["DEFAULT_VERSION"])
# TODO: remove this and related backwards compatibility code when API v9 is dropped
if request_method in BulkEditSerializer.LEGACY_DOCUMENT_ACTION_METHODS:
endpoint = BulkEditSerializer.MOVED_DOCUMENT_ACTION_ENDPOINTS[
request_method
]
logger.warning(
"Deprecated bulk_edit method '%s' requested on API version %s. "
"Use '%s' instead.",
request_method,
api_version,
endpoint,
)
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
@@ -2170,82 +2308,15 @@ class BulkEditView(PassUserMixin):
method = serializer.validated_data.get("method")
parameters = serializer.validated_data.get("parameters")
documents = serializer.validated_data.get("documents")
if method in [
bulk_edit.split,
bulk_edit.merge,
bulk_edit.rotate,
bulk_edit.delete_pages,
bulk_edit.edit_pdf,
bulk_edit.remove_password,
]:
if method.__name__ in self.METHOD_NAMES_REQUIRING_USER:
parameters["user"] = user
if not user.is_superuser:
document_objs = Document.objects.select_related("owner").filter(
pk__in=documents,
)
user_is_owner_of_all_documents = all(
(doc.owner == user or doc.owner is None) for doc in document_objs
)
# check global and object permissions for all documents
has_perms = user.has_perm("documents.change_document") and all(
has_perms_owner_aware(user, "change_document", doc)
for doc in document_objs
)
# check ownership for methods that change original document
if (
(
has_perms
and method
in [
bulk_edit.set_permissions,
bulk_edit.delete,
bulk_edit.rotate,
bulk_edit.delete_pages,
bulk_edit.edit_pdf,
bulk_edit.remove_password,
]
)
or (
method in [bulk_edit.merge, bulk_edit.split]
and parameters["delete_originals"]
)
or (method == bulk_edit.edit_pdf and parameters["update_document"])
):
has_perms = user_is_owner_of_all_documents
# check global add permissions for methods that create documents
if (
has_perms
and (
method in [bulk_edit.split, bulk_edit.merge]
or (
method in [bulk_edit.edit_pdf, bulk_edit.remove_password]
and not parameters["update_document"]
)
)
and not user.has_perm("documents.add_document")
):
has_perms = False
# check global delete permissions for methods that delete documents
if (
has_perms
and (
method == bulk_edit.delete
or (
method in [bulk_edit.merge, bulk_edit.split]
and parameters["delete_originals"]
)
)
and not user.has_perm("documents.delete_document")
):
has_perms = False
if not has_perms:
return HttpResponseForbidden("Insufficient permissions")
if not self._has_document_permissions(
user=user,
documents=documents,
method=method,
parameters=parameters,
):
return HttpResponseForbidden("Insufficient permissions")
try:
modified_field = self.MODIFIED_FIELD_BY_METHOD.get(method.__name__, None)
@@ -2302,6 +2373,168 @@ class BulkEditView(PassUserMixin):
)
@extend_schema_view(
post=extend_schema(
operation_id="documents_rotate",
description="Rotate one or more documents",
responses={
200: inline_serializer(
name="RotateDocumentsResult",
fields={
"result": serializers.CharField(),
},
),
},
),
)
class RotateDocumentsView(DocumentOperationPermissionMixin):
serializer_class = RotateDocumentsSerializer
def post(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
return self._execute_document_action(
method=bulk_edit.rotate,
validated_data=serializer.validated_data,
operation_label="document rotate",
)
@extend_schema_view(
post=extend_schema(
operation_id="documents_merge",
description="Merge selected documents into a new document",
responses={
200: inline_serializer(
name="MergeDocumentsResult",
fields={
"result": serializers.CharField(),
},
),
},
),
)
class MergeDocumentsView(DocumentOperationPermissionMixin):
serializer_class = MergeDocumentsSerializer
def post(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
return self._execute_document_action(
method=bulk_edit.merge,
validated_data=serializer.validated_data,
operation_label="document merge",
)
@extend_schema_view(
post=extend_schema(
operation_id="documents_delete",
description="Move selected documents to trash",
responses={
200: inline_serializer(
name="DeleteDocumentsResult",
fields={
"result": serializers.CharField(),
},
),
},
),
)
class DeleteDocumentsView(DocumentOperationPermissionMixin):
serializer_class = DeleteDocumentsSerializer
def post(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
return self._execute_document_action(
method=bulk_edit.delete,
validated_data=serializer.validated_data,
operation_label="document delete",
)
@extend_schema_view(
post=extend_schema(
operation_id="documents_reprocess",
description="Reprocess selected documents",
responses={
200: inline_serializer(
name="ReprocessDocumentsResult",
fields={
"result": serializers.CharField(),
},
),
},
),
)
class ReprocessDocumentsView(DocumentOperationPermissionMixin):
serializer_class = ReprocessDocumentsSerializer
def post(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
return self._execute_document_action(
method=bulk_edit.reprocess,
validated_data=serializer.validated_data,
operation_label="document reprocess",
)
@extend_schema_view(
post=extend_schema(
operation_id="documents_edit_pdf",
description="Perform PDF edit operations on a selected document",
responses={
200: inline_serializer(
name="EditPdfDocumentsResult",
fields={
"result": serializers.CharField(),
},
),
},
),
)
class EditPdfDocumentsView(DocumentOperationPermissionMixin):
serializer_class = EditPdfDocumentsSerializer
def post(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
return self._execute_document_action(
method=bulk_edit.edit_pdf,
validated_data=serializer.validated_data,
operation_label="PDF edit",
)
@extend_schema_view(
post=extend_schema(
operation_id="documents_remove_password",
description="Remove password protection from selected PDFs",
responses={
200: inline_serializer(
name="RemovePasswordDocumentsResult",
fields={
"result": serializers.CharField(),
},
),
},
),
)
class RemovePasswordDocumentsView(DocumentOperationPermissionMixin):
serializer_class = RemovePasswordDocumentsSerializer
def post(self, request, *args, **kwargs):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
return self._execute_document_action(
method=bulk_edit.remove_password,
validated_data=serializer.validated_data,
operation_label="password removal",
)
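Client-side, each operation now has its own endpoint instead of a method name inside the bulk_edit payload. A hedged sketch of a call to one of the new views; the route and the degrees parameter are assumptions inferred from the serializer names, not confirmed by this diff:

import httpx

resp = httpx.post(
    "https://paperless.example.com/api/documents/rotate/",  # assumed route
    headers={"Authorization": "Token <api-token>"},
    json={"documents": [1, 2, 3], "degrees": 90},  # assumed parameter name
)
resp.raise_for_status()
print(resp.json()["result"])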
@extend_schema_view(
post=extend_schema(
description="Upload a document via the API",


@@ -1,12 +1,14 @@
import ipaddress
import logging
import socket
from urllib.parse import urlparse
import httpx
from celery import shared_task
from django.conf import settings
from paperless.network import format_host_for_url
from paperless.network import is_public_ip
from paperless.network import resolve_hostname_ips
from paperless.network import validate_outbound_http_url
logger = logging.getLogger("paperless.workflows.webhooks")
@@ -34,23 +36,19 @@ class WebhookTransport(httpx.HTTPTransport):
raise httpx.ConnectError("No hostname in request URL")
try:
addr_info = socket.getaddrinfo(hostname, None)
except socket.gaierror as e:
raise httpx.ConnectError(f"Could not resolve hostname: {hostname}") from e
ips = [info[4][0] for info in addr_info if info and info[4]]
if not ips:
raise httpx.ConnectError(f"Could not resolve hostname: {hostname}")
ips = resolve_hostname_ips(hostname)
except ValueError as e:
raise httpx.ConnectError(str(e)) from e
if not self.allow_internal:
for ip_str in ips:
if not WebhookTransport.is_public_ip(ip_str):
if not is_public_ip(ip_str):
raise httpx.ConnectError(
f"Connection blocked: {hostname} resolves to a non-public address",
)
ip_str = ips[0]
formatted_ip = self._format_ip_for_url(ip_str)
formatted_ip = format_host_for_url(ip_str)
new_headers = httpx.Headers(request.headers)
if "host" in new_headers:
@@ -69,40 +67,6 @@ class WebhookTransport(httpx.HTTPTransport):
return super().handle_request(request)
def _format_ip_for_url(self, ip: str) -> str:
"""
Format IP address for use in URL (wrap IPv6 in brackets)
"""
try:
ip_obj = ipaddress.ip_address(ip)
if ip_obj.version == 6:
return f"[{ip}]"
return ip
except ValueError:
return ip
@staticmethod
def is_public_ip(ip: str | int) -> bool:
try:
obj = ipaddress.ip_address(ip)
return not (
obj.is_private
or obj.is_loopback
or obj.is_link_local
or obj.is_multicast
or obj.is_unspecified
)
except ValueError: # pragma: no cover
return False
@staticmethod
def resolve_first_ip(host: str) -> str | None:
try:
info = socket.getaddrinfo(host, None)
return info[0][4][0] if info else None
except Exception: # pragma: no cover
return None
@shared_task(
retry_backoff=True,
@@ -118,21 +82,24 @@ def send_webhook(
*,
as_json: bool = False,
):
p = urlparse(url)
if p.scheme.lower() not in settings.WEBHOOKS_ALLOWED_SCHEMES or not p.hostname:
logger.warning("Webhook blocked: invalid scheme/hostname")
try:
parsed = validate_outbound_http_url(
url,
allowed_schemes=settings.WEBHOOKS_ALLOWED_SCHEMES,
allowed_ports=settings.WEBHOOKS_ALLOWED_PORTS,
# Internal-address checks happen in transport to preserve ConnectError behavior.
allow_internal=True,
)
except ValueError as e:
logger.warning("Webhook blocked: %s", e)
raise
hostname = parsed.hostname
if hostname is None: # pragma: no cover
raise ValueError("Invalid URL scheme or hostname.")
port = p.port or (443 if p.scheme == "https" else 80)
if (
len(settings.WEBHOOKS_ALLOWED_PORTS) > 0
and port not in settings.WEBHOOKS_ALLOWED_PORTS
):
logger.warning("Webhook blocked: port not permitted")
raise ValueError("Destination port not permitted.")
transport = WebhookTransport(
hostname=p.hostname,
hostname=hostname,
allow_internal=settings.WEBHOOKS_ALLOW_INTERNAL_REQUESTS,
)
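The transport above resolves the hostname once, validates every returned address, and then connects to the first resolved IP while preserving the original Host header, which closes the DNS-rebinding window between validation and connection. A condensed stdlib sketch of that pattern (an illustration, not the shipped code):

import ipaddress
import socket

def pin_resolved_ip(hostname: str) -> str:
    """Resolve once; reject non-public results; return the IP to connect to."""
    infos = socket.getaddrinfo(hostname, None)
    ips = [info[4][0] for info in infos if info and info[4]]
    if not ips:
        raise ValueError(f"Could not resolve hostname: {hostname}")
    for ip in ips:
        obj = ipaddress.ip_address(ip)
        if (
            obj.is_private
            or obj.is_loopback
            or obj.is_link_local
            or obj.is_multicast
            or obj.is_unspecified
        ):
            raise ValueError(f"{hostname} resolves to a non-public address")
    return ips[0]  # connect here and send "Host: <hostname>" explicitly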

File diff suppressed because it is too large.


@@ -1,7 +1,6 @@
import os
from celery import Celery
from celery.signals import worker_process_init
# Set the default Django settings module for the 'celery' program.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "paperless.settings")
@@ -16,19 +15,3 @@ app.config_from_object("django.conf:settings", namespace="CELERY")
# Load task modules from all registered Django apps.
app.autodiscover_tasks()
@worker_process_init.connect
def on_worker_process_init(**kwargs) -> None: # pragma: no cover
"""
Register built-in parsers eagerly in each Celery worker process.
This registers only the built-in parsers (no entrypoint discovery) so
that workers can begin consuming documents immediately. Entrypoint
discovery for third-party parsers is deferred to the first call of
get_parser_registry() inside a task, keeping worker_process_init
well within its 4-second timeout budget.
"""
from paperless.parsers.registry import init_builtin_parsers
init_builtin_parsers()


@@ -7,6 +7,7 @@ from pathlib import Path
from django.conf import settings
from django.core.checks import Error
from django.core.checks import Tags
from django.core.checks import Warning
from django.core.checks import register
from django.db import connections
@@ -204,15 +205,16 @@ def audit_log_check(app_configs, **kwargs):
return result
@register()
@register(Tags.compatibility)
def check_v3_minimum_upgrade_version(
app_configs: object,
**kwargs: object,
) -> list[Error]:
"""Enforce that upgrades to v3 must start from v2.20.9.
"""
Enforce that upgrades to v3 must start from v2.20.10.
v3 squashes all prior migrations into 0001_squashed and 0002_squashed.
If a user skips v2.20.9, the data migration in 1075_workflowaction_order
If a user skips v2.20.10, the data migration in 1075_workflowaction_order
never runs and the squash may apply schema changes against an incomplete
database state.
"""
@@ -239,7 +241,7 @@ def check_v3_minimum_upgrade_version(
if {"0001_squashed", "0002_squashed"} & applied:
return []
# On v2.20.9 exactly — squash will pick up cleanly from here
# On v2.20.10 exactly — squash will pick up cleanly from here
if "1075_workflowaction_order" in applied:
return []
@@ -250,8 +252,8 @@ def check_v3_minimum_upgrade_version(
Error(
"Cannot upgrade to Paperless-ngx v3 from this version.",
hint=(
"Upgrading to v3 can only be performed from v2.20.9."
"Please upgrade to v2.20.9, run migrations, then upgrade to v3."
"Upgrading to v3 can only be performed from v2.20.10."
"Please upgrade to v2.20.10, run migrations, then upgrade to v3."
"See https://docs.paperless-ngx.com/setup/#upgrading for details."
),
id="paperless.E002",


@@ -188,6 +188,7 @@ class AIConfig(BaseConfig):
llm_model: str = dataclasses.field(init=False)
llm_api_key: str = dataclasses.field(init=False)
llm_endpoint: str = dataclasses.field(init=False)
llm_allow_internal_endpoints: bool = dataclasses.field(init=False)
def __post_init__(self) -> None:
app_config = self._get_config_instance()
@@ -203,6 +204,7 @@ class AIConfig(BaseConfig):
self.llm_model = app_config.llm_model or settings.LLM_MODEL
self.llm_api_key = app_config.llm_api_key or settings.LLM_API_KEY
self.llm_endpoint = app_config.llm_endpoint or settings.LLM_ENDPOINT
self.llm_allow_internal_endpoints = settings.LLM_ALLOW_INTERNAL_ENDPOINTS
@property
def llm_index_enabled(self) -> bool:

src/paperless/network.py (new file, 76 lines)

@@ -0,0 +1,76 @@
import ipaddress
import socket
from collections.abc import Collection
from urllib.parse import ParseResult
from urllib.parse import urlparse
def is_public_ip(ip: str | int) -> bool:
try:
obj = ipaddress.ip_address(ip)
return not (
obj.is_private
or obj.is_loopback
or obj.is_link_local
or obj.is_multicast
or obj.is_unspecified
)
except ValueError: # pragma: no cover
return False
def resolve_hostname_ips(hostname: str) -> list[str]:
try:
addr_info = socket.getaddrinfo(hostname, None)
except socket.gaierror as e:
raise ValueError(f"Could not resolve hostname: {hostname}") from e
ips = [info[4][0] for info in addr_info if info and info[4]]
if not ips:
raise ValueError(f"Could not resolve hostname: {hostname}")
return ips
def format_host_for_url(host: str) -> str:
"""
Format IP address for URL use (wrap IPv6 in brackets).
"""
try:
ip_obj = ipaddress.ip_address(host)
if ip_obj.version == 6:
return f"[{host}]"
return host
except ValueError:
return host
def validate_outbound_http_url(
url: str,
*,
allowed_schemes: Collection[str] = ("http", "https"),
allowed_ports: Collection[int] | None = None,
allow_internal: bool = False,
) -> ParseResult:
parsed = urlparse(url)
scheme = parsed.scheme.lower()
if scheme not in allowed_schemes or not parsed.hostname:
raise ValueError("Invalid URL scheme or hostname.")
default_port = 443 if scheme == "https" else 80
try:
port = parsed.port or default_port
except ValueError as e:
raise ValueError("Invalid URL scheme or hostname.") from e
if allowed_ports and port not in allowed_ports:
raise ValueError("Destination port not permitted.")
if not allow_internal:
for ip_str in resolve_hostname_ips(parsed.hostname):
if not is_public_ip(ip_str):
raise ValueError(
f"Connection blocked: {parsed.hostname} resolves to a non-public address",
)
return parsed
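A usage sketch of the shared validator (the URL is illustrative; note that allow_internal=False performs a DNS lookup of the hostname before returning):

from paperless.network import validate_outbound_http_url

try:
    parsed = validate_outbound_http_url(
        "https://hooks.example.com:8443/notify",
        allowed_schemes=("http", "https"),
        allowed_ports=(443, 8443),
        allow_internal=False,  # resolves the host and rejects private targets
    )
except ValueError as err:
    print(f"blocked: {err}")
else:
    print(f"allowed: {parsed.hostname}:{parsed.port}")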


@@ -1,379 +0,0 @@
"""
Public interface for the Paperless-ngx parser plugin system.
This module defines ParserProtocol — the structural contract that every
document parser must satisfy, whether it is a built-in parser shipped with
Paperless-ngx or a third-party parser installed via a Python entrypoint.
Phase 1/2 scope: only the Protocol is defined here. The transitional
DocumentParser ABC (Phase 3) and concrete built-in parsers (Phase 3+) will
be added in later phases, so there are intentionally no imports of parser
implementations here.
Usage example (third-party parser)::
from paperless.parsers import ParserProtocol
class MyParser:
name = "my-parser"
version = "1.0.0"
author = "Acme Corp"
url = "https://example.com/my-parser"
@classmethod
def supported_mime_types(cls) -> dict[str, str]:
return {"application/x-my-format": ".myf"}
@classmethod
def score(cls, mime_type, filename, path=None):
return 10
# … implement remaining protocol methods …
assert isinstance(MyParser(), ParserProtocol)
"""
from __future__ import annotations
from typing import TYPE_CHECKING
from typing import Protocol
from typing import Self
from typing import TypedDict
from typing import runtime_checkable
if TYPE_CHECKING:
import datetime
from pathlib import Path
from types import TracebackType
__all__ = [
"MetadataEntry",
"ParserProtocol",
]
class MetadataEntry(TypedDict):
"""A single metadata field extracted from a document.
All four keys are required. Values are always serialised to strings —
type-specific conversion (dates, integers, lists) is the responsibility
of the parser before returning.
"""
namespace: str
"""URI of the metadata namespace (e.g. 'http://ns.adobe.com/pdf/1.3/')."""
prefix: str
"""Conventional namespace prefix (e.g. 'pdf', 'xmp', 'dc')."""
key: str
"""Field name within the namespace (e.g. 'Author', 'CreateDate')."""
value: str
"""String representation of the field value."""
@runtime_checkable
class ParserProtocol(Protocol):
"""Structural contract for all Paperless-ngx document parsers.
Both built-in parsers and third-party plugins (discovered via the
"paperless_ngx.parsers" entrypoint group) must satisfy this Protocol.
Because it is decorated with runtime_checkable, isinstance(obj,
ParserProtocol) works at runtime based on method presence, which is
useful for validation in ParserRegistry.discover.
Parsers must expose four string attributes at the class level so the
registry can log attribution information without instantiating the parser:
name : str
Human-readable parser name (e.g. "Tesseract OCR").
version : str
Semantic version string (e.g. "1.2.3").
author : str
Author or organisation name.
url : str
URL for documentation, source code, or issue tracker.
"""
# ------------------------------------------------------------------
# Class-level identity (checked by the registry, not Protocol methods)
# ------------------------------------------------------------------
name: str
version: str
author: str
url: str
# ------------------------------------------------------------------
# Class methods
# ------------------------------------------------------------------
@classmethod
def supported_mime_types(cls) -> dict[str, str]:
"""Return a mapping of supported MIME types to preferred file extensions.
The keys are MIME type strings (e.g. "application/pdf"), and the
values are the preferred file extension including the leading dot
(e.g. ".pdf"). The registry uses this mapping both to decide whether
a parser is a candidate for a given file and to determine the default
extension when creating archive copies.
Returns
-------
dict[str, str]
{mime_type: extension} mapping — may be empty if the parser
has been temporarily disabled.
"""
...
@classmethod
def score(
cls,
mime_type: str,
filename: str,
path: Path | None = None,
) -> int | None:
"""Return a priority score for handling this file, or None to decline.
The registry calls this after confirming that the MIME type is in
supported_mime_types. Parsers may inspect filename and optionally
the file at path to refine their confidence level.
A higher score wins. Return None to explicitly decline handling a file
even though the MIME type is listed as supported (e.g. when a feature
flag is disabled, or a required service is not configured).
Parameters
----------
mime_type:
The detected MIME type of the file to be parsed.
filename:
The original filename, including extension.
path:
Optional filesystem path to the file. Parsers that need to
inspect file content (e.g. magic-byte sniffing) may use this.
May be None when scoring happens before the file is available locally.
Returns
-------
int | None
Priority score (higher wins), or None to decline.
"""
...
# ------------------------------------------------------------------
# Properties
# ------------------------------------------------------------------
@property
def can_produce_archive(self) -> bool:
"""Whether this parser can produce a searchable PDF archive copy.
If True, the consumption pipeline may request an archive version when
processing the document, subject to the ARCHIVE_FILE_GENERATION
setting. If False, only thumbnail and text extraction are performed.
"""
...
@property
def requires_pdf_rendition(self) -> bool:
"""Whether the parser must produce a PDF for the frontend to display.
True for formats the browser cannot display natively (e.g. DOCX, ODT).
When True, the pipeline always stores the PDF output regardless of the
ARCHIVE_FILE_GENERATION setting, since the original format cannot be
shown to the user.
"""
...
# ------------------------------------------------------------------
# Core parsing interface
# ------------------------------------------------------------------
def parse(
self,
document_path: Path,
mime_type: str,
*,
produce_archive: bool = True,
) -> None:
"""Parse document_path and populate internal state.
After a successful call, callers retrieve results via get_text,
get_date, and get_archive_path.
Parameters
----------
document_path:
Absolute path to the document file to parse.
mime_type:
Detected MIME type of the document.
produce_archive:
When True (the default) and can_produce_archive is also True,
the parser should produce a searchable PDF at the path returned
by get_archive_path. Pass False when only text extraction and
thumbnail generation are required and disk I/O should be minimised.
Raises
------
documents.parsers.ParseError
If parsing fails for any reason.
"""
...
# ------------------------------------------------------------------
# Result accessors
# ------------------------------------------------------------------
def get_text(self) -> str | None:
"""Return the plain-text content extracted during parse.
Returns
-------
str | None
Extracted text, or None if no text could be found.
"""
...
def get_date(self) -> datetime.datetime | None:
"""Return the document date detected during parse.
Returns
-------
datetime.datetime | None
Detected document date, or None if no date was found.
"""
...
def get_archive_path(self) -> Path | None:
"""Return the path to the generated archive PDF, or None.
Returns
-------
Path | None
Path to the searchable PDF archive, or None if no archive was
produced (e.g. because produce_archive=False or the parser does
not support archive generation).
"""
...
# ------------------------------------------------------------------
# Thumbnail and metadata
# ------------------------------------------------------------------
def get_thumbnail(self, document_path: Path, mime_type: str) -> Path:
"""Generate and return the path to a thumbnail image for the document.
May be called independently of parse. The returned path must point to
an existing WebP image file inside the parser's temporary working
directory.
Parameters
----------
document_path:
Absolute path to the source document.
mime_type:
Detected MIME type of the document.
Returns
-------
Path
Path to the generated thumbnail image (WebP format preferred).
"""
...
def get_page_count(
self,
document_path: Path,
mime_type: str,
) -> int | None:
"""Return the number of pages in the document, if determinable.
Parameters
----------
document_path:
Absolute path to the source document.
mime_type:
Detected MIME type of the document.
Returns
-------
int | None
Page count, or None if the parser cannot determine it.
"""
...
def extract_metadata(
self,
document_path: Path,
mime_type: str,
) -> list[MetadataEntry]:
"""Extract format-specific metadata from the document.
Called by the API view layer on demand — not during the consumption
pipeline. Results are returned to the frontend for per-file display.
For documents with an archive version, this method is called twice:
once for the original file (with its native MIME type) and once for
the archive file (with ``"application/pdf"``). Parsers that produce
archives should handle both cases.
Implementations must not raise. A failure to read metadata is not
fatal — log a warning and return whatever partial results were
collected, or ``[]`` if none.
Parameters
----------
document_path:
Absolute path to the file to extract metadata from.
mime_type:
MIME type of the file at ``document_path``. May be
``"application/pdf"`` when called for the archive version.
Returns
-------
list[MetadataEntry]
Zero or more metadata entries. Returns ``[]`` if no metadata
could be extracted or the format does not support it.
"""
...
# ------------------------------------------------------------------
# Context manager
# ------------------------------------------------------------------
def __enter__(self) -> Self:
"""Enter the parser context, returning the parser instance.
Implementations should perform any resource allocation here if not
done in __init__ (e.g. creating API clients or temp directories).
Returns
-------
Self
The parser instance itself.
"""
...
def __exit__(
self,
exc_type: type[BaseException] | None,
exc_val: BaseException | None,
exc_tb: TracebackType | None,
) -> None:
"""Exit the parser context and release all resources.
Implementations must clean up all temporary files and other resources
regardless of whether an exception occurred.
Parameters
----------
exc_type:
The exception class, or None if no exception was raised.
exc_val:
The exception instance, or None.
exc_tb:
The traceback, or None.
"""
...
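Fleshing out the docstring's usage example, here is a minimal, do-nothing parser that satisfies every member of the Protocol end to end. NullParser, its MIME type, and its placeholder thumbnail are hypothetical, for illustration only:

import tempfile
from pathlib import Path

from paperless.parsers import ParserProtocol


class NullParser:
    name = "Null Parser"
    version = "0.1.0"
    author = "Example Author"
    url = "https://example.com/null-parser"

    @classmethod
    def supported_mime_types(cls) -> dict[str, str]:
        return {"application/x-null": ".null"}

    @classmethod
    def score(cls, mime_type, filename, path=None):
        return 1 if mime_type in cls.supported_mime_types() else None

    @property
    def can_produce_archive(self) -> bool:
        return False

    @property
    def requires_pdf_rendition(self) -> bool:
        return False

    def parse(self, document_path, mime_type, *, produce_archive=True):
        self._text = document_path.read_text(errors="replace")

    def get_text(self):
        return getattr(self, "_text", None)

    def get_date(self):
        return None

    def get_archive_path(self):
        return None

    def get_thumbnail(self, document_path, mime_type):
        # Placeholder only: a real parser must render an actual WebP image.
        out = Path(tempfile.gettempdir()) / "null-thumb.webp"
        out.write_bytes(b"")
        return out

    def get_page_count(self, document_path, mime_type):
        return None

    def extract_metadata(self, document_path, mime_type):
        return []

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        return None


# Passes because ParserProtocol is runtime_checkable (member-presence check).
assert isinstance(NullParser(), ParserProtocol)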


@@ -1,364 +0,0 @@
"""
Singleton registry that tracks all document parsers available to
Paperless-ngx — both built-ins shipped with the application and third-party
plugins installed via Python entrypoints.
Public surface
--------------
get_parser_registry
Lazy-initialise and return the shared ParserRegistry. This is the primary
entry point for production code.
init_builtin_parsers
Register built-in parsers only, without entrypoint discovery. Safe to
call from Celery worker_process_init where importing all entrypoints
would be wasteful or cause side effects.
reset_parser_registry
Reset module-level state. For tests only.
Entrypoint group
----------------
Third-party parsers must advertise themselves under the
"paperless_ngx.parsers" entrypoint group in their pyproject.toml::
[project.entry-points."paperless_ngx.parsers"]
my_parser = "my_package.parsers:MyParser"
The loaded class must expose the following attributes at the class level
(not just on instances) for the registry to accept it:
name, version, author, url, supported_mime_types (callable), score (callable).
"""
from __future__ import annotations
import logging
from importlib.metadata import entry_points
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from pathlib import Path
from paperless.parsers import ParserProtocol
logger = logging.getLogger("paperless.parsers.registry")
# ---------------------------------------------------------------------------
# Module-level singleton state
# ---------------------------------------------------------------------------
_registry: ParserRegistry | None = None
_discovery_complete: bool = False
# Attribute names that every registered external parser class must expose.
_REQUIRED_ATTRS: tuple[str, ...] = (
"name",
"version",
"author",
"url",
"supported_mime_types",
"score",
)
# ---------------------------------------------------------------------------
# Module-level accessor functions
# ---------------------------------------------------------------------------
def get_parser_registry() -> ParserRegistry:
"""Return the shared ParserRegistry instance.
On the first call this function:
1. Creates a new ParserRegistry.
2. Calls register_defaults to install built-in parsers.
3. Calls discover to load third-party plugins via importlib.metadata entrypoints.
4. Calls log_summary to emit a startup summary.
Subsequent calls return the same instance immediately.
Returns
-------
ParserRegistry
The shared registry singleton.
"""
global _registry, _discovery_complete
if _registry is None:
_registry = ParserRegistry()
_registry.register_defaults()
if not _discovery_complete:
_registry.discover()
_registry.log_summary()
_discovery_complete = True
return _registry
def init_builtin_parsers() -> None:
"""Register built-in parsers without performing entrypoint discovery.
Intended for use in Celery worker_process_init handlers where importing
all installed entrypoints would be wasteful, slow, or could produce
undesirable side effects. Entrypoint discovery (third-party plugins) is
deliberately not performed.
Safe to call multiple times — subsequent calls are no-ops.
Returns
-------
None
"""
global _registry
if _registry is None:
_registry = ParserRegistry()
_registry.register_defaults()
def reset_parser_registry() -> None:
"""Reset the module-level registry state to its initial values.
Resets _registry and _discovery_complete so the next call to
get_parser_registry will re-initialise everything from scratch.
FOR TESTS ONLY. Do not call this in production code — resetting the
registry mid-request causes all subsequent parser lookups to go through
discovery again, which is expensive and may have unexpected side effects
in multi-threaded environments.
Returns
-------
None
"""
global _registry, _discovery_complete
_registry = None
_discovery_complete = False
# ---------------------------------------------------------------------------
# Registry class
# ---------------------------------------------------------------------------
class ParserRegistry:
"""Registry that maps MIME types to the best available parser class.
Parsers are partitioned into two lists:
_builtins
Parser classes registered via register_builtin (populated by
register_defaults in Phase 3+).
_external
Parser classes loaded from installed Python entrypoints via discover.
When resolving a parser for a file, external parsers are evaluated
alongside built-in parsers using a uniform scoring mechanism. Both lists
are iterated together; the class with the highest score wins. If an
external parser wins, its attribution details are logged so users can
identify which third-party package handled their document.
"""
def __init__(self) -> None:
self._external: list[type[ParserProtocol]] = []
self._builtins: list[type[ParserProtocol]] = []
# ------------------------------------------------------------------
# Registration
# ------------------------------------------------------------------
def register_builtin(self, parser_class: type[ParserProtocol]) -> None:
"""Register a built-in parser class.
Built-in parsers are shipped with Paperless-ngx and are appended to
the _builtins list. They are never overridden by external parsers;
instead, scoring determines which parser wins for any given file.
Parameters
----------
parser_class:
The parser class to register. Must satisfy ParserProtocol.
"""
self._builtins.append(parser_class)
def register_defaults(self) -> None:
"""Register the built-in parsers that ship with Paperless-ngx.
Each parser that has been migrated to the new ParserProtocol interface
is registered here. Parsers are added in ascending weight order so
that log output is predictable; scoring determines which parser wins
at runtime regardless of registration order.
"""
from paperless.parsers.text import TextDocumentParser
self.register_builtin(TextDocumentParser)
# ------------------------------------------------------------------
# Discovery
# ------------------------------------------------------------------
def discover(self) -> None:
"""Load third-party parsers from the "paperless_ngx.parsers" entrypoint group.
For each advertised entrypoint the method:
1. Calls ep.load() to import the class.
2. Validates that the class exposes all required attributes.
3. On success, appends the class to _external and logs an info message.
4. On failure (import error or missing attributes), logs an appropriate
warning/error and continues to the next entrypoint.
Errors during discovery of a single parser do not prevent other parsers
from being loaded.
Returns
-------
None
"""
eps = entry_points(group="paperless_ngx.parsers")
for ep in eps:
try:
parser_class = ep.load()
except Exception:
logger.exception(
"Failed to load parser entrypoint '%s' — skipping.",
ep.name,
)
continue
missing = [
attr for attr in _REQUIRED_ATTRS if not hasattr(parser_class, attr)
]
if missing:
logger.warning(
"Parser loaded from entrypoint '%s' is missing required "
"attributes %r — skipping.",
ep.name,
missing,
)
continue
self._external.append(parser_class)
logger.info(
"Loaded third-party parser '%s' v%s by %s (entrypoint: '%s').",
parser_class.name,
parser_class.version,
parser_class.author,
ep.name,
)
# ------------------------------------------------------------------
# Summary logging
# ------------------------------------------------------------------
def log_summary(self) -> None:
"""Log a startup summary of all registered parsers.
Built-in parsers are listed first, followed by any external parsers
discovered from entrypoints. If no external parsers were found a
short informational message is logged instead of an empty list.
Returns
-------
None
"""
logger.info(
"Built-in parsers (%d):",
len(self._builtins),
)
for cls in self._builtins:
logger.info(
" [built-in] %s v%s%s",
getattr(cls, "name", repr(cls)),
getattr(cls, "version", "unknown"),
getattr(cls, "url", "built-in"),
)
if not self._external:
logger.info("No third-party parsers discovered.")
return
logger.info(
"Third-party parsers (%d):",
len(self._external),
)
for cls in self._external:
logger.info(
" [external] %s v%s by %s — report issues at %s",
getattr(cls, "name", repr(cls)),
getattr(cls, "version", "unknown"),
getattr(cls, "author", "unknown"),
getattr(cls, "url", "unknown"),
)
# ------------------------------------------------------------------
# Parser resolution
# ------------------------------------------------------------------
def get_parser_for_file(
self,
mime_type: str,
filename: str,
path: Path | None = None,
) -> type[ParserProtocol] | None:
"""Return the best parser class for the given file, or None.
All registered parsers (external first, then built-ins) are evaluated
against the file. A parser is eligible if mime_type appears in the dict
returned by its supported_mime_types classmethod, and its score
classmethod returns a non-None integer.
The parser with the highest score wins. When two parsers return the
same score, the one that appears earlier in the evaluation order wins
(external parsers are evaluated before built-ins, giving third-party
packages a chance to override defaults at equal priority).
When an external parser is selected, its identity is logged at INFO
level so operators can trace which package handled a document.
Parameters
----------
mime_type:
The detected MIME type of the file.
filename:
The original filename, including extension.
path:
Optional filesystem path to the file. Forwarded to each
parser's score method.
Returns
-------
type[ParserProtocol] | None
The winning parser class, or None if no parser can handle the file.
"""
best_score: int | None = None
best_parser: type[ParserProtocol] | None = None
# External parsers are placed first so that, at equal scores, an
# external parser wins over a built-in (first-seen policy).
for parser_class in (*self._external, *self._builtins):
if mime_type not in parser_class.supported_mime_types():
continue
score = parser_class.score(mime_type, filename, path)
if score is None:
continue
if best_score is None or score > best_score:
best_score = score
best_parser = parser_class
if best_parser is not None and best_parser in self._external:
logger.info(
"Document handled by third-party parser '%s' v%s%s",
getattr(best_parser, "name", repr(best_parser)),
getattr(best_parser, "version", "unknown"),
getattr(best_parser, "url", "unknown"),
)
return best_parser
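An illustrative resolution flow through the registry (assumes configured Django settings; the filename is an example and the file must exist for parse to succeed):

from pathlib import Path

from paperless.parsers.registry import get_parser_registry

registry = get_parser_registry()  # built-ins + entrypoint discovery, once
parser_class = registry.get_parser_for_file(
    mime_type="text/plain",
    filename="letter.txt",
    path=None,  # scoring may run before the file is locally available
)
if parser_class is not None:
    with parser_class() as parser:
        parser.parse(Path("letter.txt"), "text/plain")
        print(parser.get_text())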


@@ -1,320 +0,0 @@
"""
Built-in plain-text document parser.
Handles text/plain, text/csv, and application/csv MIME types by reading the
file content directly. Thumbnails are generated by rendering a page-sized
WebP image from the first 100,000 characters using Pillow.
"""
from __future__ import annotations
import logging
import shutil
import tempfile
from pathlib import Path
from typing import TYPE_CHECKING
from typing import Self
from django.conf import settings
from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont
from paperless.version import __full_version_str__
if TYPE_CHECKING:
import datetime
from types import TracebackType
from paperless.parsers import MetadataEntry
logger = logging.getLogger("paperless.parsing.text")
_SUPPORTED_MIME_TYPES: dict[str, str] = {
"text/plain": ".txt",
"text/csv": ".csv",
"application/csv": ".csv",
}
class TextDocumentParser:
"""Parse plain-text documents (txt, csv) for Paperless-ngx.
This parser reads the file content directly as UTF-8 text and renders a
simple thumbnail using Pillow. It does not perform OCR and does not
produce a searchable PDF archive copy.
Class attributes
----------------
name : str
Human-readable parser name.
version : str
Semantic version string, kept in sync with Paperless-ngx releases.
author : str
Maintainer name.
url : str
Issue tracker / source URL.
"""
name: str = "Paperless-ngx Text Parser"
version: str = __full_version_str__
author: str = "Paperless-ngx Contributors"
url: str = "https://github.com/paperless-ngx/paperless-ngx"
# ------------------------------------------------------------------
# Class methods
# ------------------------------------------------------------------
@classmethod
def supported_mime_types(cls) -> dict[str, str]:
"""Return the MIME types this parser handles.
Returns
-------
dict[str, str]
Mapping of MIME type to preferred file extension.
"""
return _SUPPORTED_MIME_TYPES
@classmethod
def score(
cls,
mime_type: str,
filename: str,
path: Path | None = None,
) -> int | None:
"""Return the priority score for handling this file.
Parameters
----------
mime_type:
Detected MIME type of the file.
filename:
Original filename including extension.
path:
Optional filesystem path. Not inspected by this parser.
Returns
-------
int | None
10 if the MIME type is supported, otherwise None.
"""
if mime_type in _SUPPORTED_MIME_TYPES:
return 10
return None
# ------------------------------------------------------------------
# Properties
# ------------------------------------------------------------------
@property
def can_produce_archive(self) -> bool:
"""Whether this parser can produce a searchable PDF archive copy.
Returns
-------
bool
Always False — the text parser does not produce a PDF archive.
"""
return False
@property
def requires_pdf_rendition(self) -> bool:
"""Whether the parser must produce a PDF for the frontend to display.
Returns
-------
bool
Always False — plain text files are displayable as-is.
"""
return False
# ------------------------------------------------------------------
# Lifecycle
# ------------------------------------------------------------------
def __init__(self, logging_group: object = None) -> None:
settings.SCRATCH_DIR.mkdir(parents=True, exist_ok=True)
self._tempdir = Path(
tempfile.mkdtemp(prefix="paperless-", dir=settings.SCRATCH_DIR),
)
self._text: str | None = None
def __enter__(self) -> Self:
return self
def __exit__(
self,
exc_type: type[BaseException] | None,
exc_val: BaseException | None,
exc_tb: TracebackType | None,
) -> None:
logger.debug("Cleaning up temporary directory %s", self._tempdir)
shutil.rmtree(self._tempdir, ignore_errors=True)
# ------------------------------------------------------------------
# Core parsing interface
# ------------------------------------------------------------------
def parse(
self,
document_path: Path,
mime_type: str,
*,
produce_archive: bool = True,
) -> None:
"""Read the document and store its text content.
Parameters
----------
document_path:
Absolute path to the text file.
mime_type:
Detected MIME type of the document.
produce_archive:
Ignored — this parser never produces a PDF archive.
Raises
------
documents.parsers.ParseError
If the file cannot be read.
"""
self._text = self._read_text(document_path)
# ------------------------------------------------------------------
# Result accessors
# ------------------------------------------------------------------
def get_text(self) -> str | None:
"""Return the plain-text content extracted during parse.
Returns
-------
str | None
Extracted text, or None if parse has not been called yet.
"""
return self._text
def get_date(self) -> datetime.datetime | None:
"""Return the document date detected during parse.
Returns
-------
datetime.datetime | None
Always None — the text parser does not detect dates.
"""
return None
def get_archive_path(self) -> Path | None:
"""Return the path to a generated archive PDF, or None.
Returns
-------
Path | None
Always None — the text parser does not produce a PDF archive.
"""
return None
# ------------------------------------------------------------------
# Thumbnail and metadata
# ------------------------------------------------------------------
def get_thumbnail(self, document_path: Path, mime_type: str) -> Path:
"""Render the first portion of the document as a WebP thumbnail.
Parameters
----------
document_path:
Absolute path to the source document.
mime_type:
Detected MIME type of the document.
Returns
-------
Path
Path to the generated WebP thumbnail inside the temporary directory.
"""
max_chars = 100_000
file_size_limit = 50 * 1024 * 1024
if document_path.stat().st_size > file_size_limit:
text = "[File too large to preview]"
else:
            with document_path.open("r", encoding="utf-8", errors="replace") as f:
text = f.read(max_chars)
img = Image.new("RGB", (500, 700), color="white")
draw = ImageDraw.Draw(img)
font = ImageFont.truetype(
font=settings.THUMBNAIL_FONT_NAME,
size=20,
layout_engine=ImageFont.Layout.BASIC,
)
draw.multiline_text((5, 5), text, font=font, fill="black", spacing=4)
out_path = self._tempdir / "thumb.webp"
img.save(out_path, format="WEBP")
return out_path
def get_page_count(
self,
document_path: Path,
mime_type: str,
) -> int | None:
"""Return the number of pages in the document.
Parameters
----------
document_path:
Absolute path to the source document.
mime_type:
Detected MIME type of the document.
Returns
-------
int | None
Always None — page count is not meaningful for plain text.
"""
return None
def extract_metadata(
self,
document_path: Path,
mime_type: str,
) -> list[MetadataEntry]:
"""Extract format-specific metadata from the document.
Returns
-------
list[MetadataEntry]
Always ``[]`` — plain text files carry no structured metadata.
"""
return []
# ------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------
def _read_text(self, filepath: Path) -> str:
"""Read file content, replacing invalid UTF-8 bytes rather than failing.
Parameters
----------
filepath:
Path to the file to read.
Returns
-------
str
File content as a string.
"""
try:
return filepath.read_text(encoding="utf-8")
except UnicodeDecodeError as exc:
logger.warning(
"Unicode error reading %s, replacing bad bytes: %s",
filepath,
exc,
)
return filepath.read_bytes().decode("utf-8", errors="replace")
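An example lifecycle for the text parser (assumes configured Django settings for SCRATCH_DIR and the thumbnail font; the sample path is illustrative):

from pathlib import Path

from paperless.parsers.text import TextDocumentParser

sample = Path("/tmp/example.txt")
sample.write_text("Hello, Paperless!\n", encoding="utf-8")

with TextDocumentParser() as parser:
    parser.parse(sample, "text/plain")
    print(parser.get_text())  # "Hello, Paperless!"
    thumb = parser.get_thumbnail(sample, "text/plain")
    print(thumb)  # <scratch-dir>/.../thumb.webp, removed on __exit__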


@@ -1,440 +0,0 @@
"""
Built-in Tika document parser.
Handles Office documents (DOCX, ODT, XLS, XLSX, PPT, PPTX, RTF, etc.) by
sending them to an Apache Tika server for text extraction and a Gotenberg
server for PDF conversion. Because the source formats cannot be rendered by
a browser natively, the parser always produces a PDF rendition for display.
"""
from __future__ import annotations
import logging
import shutil
import tempfile
from contextlib import ExitStack
from pathlib import Path
from typing import TYPE_CHECKING
from typing import Self
import httpx
from django.conf import settings
from django.utils import timezone
from gotenberg_client import GotenbergClient
from gotenberg_client.options import PdfAFormat
from tika_client import TikaClient
from documents.parsers import ParseError
from documents.parsers import make_thumbnail_from_pdf
from paperless.config import OutputTypeConfig
from paperless.models import OutputTypeChoices
from paperless.version import __full_version_str__
if TYPE_CHECKING:
import datetime
from types import TracebackType
from paperless.parsers import MetadataEntry
logger = logging.getLogger("paperless.parsing.tika")
_SUPPORTED_MIME_TYPES: dict[str, str] = {
"application/msword": ".doc",
"application/vnd.openxmlformats-officedocument.wordprocessingml.document": ".docx",
"application/vnd.ms-excel": ".xls",
"application/vnd.openxmlformats-officedocument.spreadsheetml.sheet": ".xlsx",
"application/vnd.ms-powerpoint": ".ppt",
"application/vnd.openxmlformats-officedocument.presentationml.presentation": ".pptx",
"application/vnd.openxmlformats-officedocument.presentationml.slideshow": ".ppsx",
"application/vnd.oasis.opendocument.presentation": ".odp",
"application/vnd.oasis.opendocument.spreadsheet": ".ods",
"application/vnd.oasis.opendocument.text": ".odt",
"application/vnd.oasis.opendocument.graphics": ".odg",
"text/rtf": ".rtf",
}
class TikaDocumentParser:
"""Parse Office documents via Apache Tika and Gotenberg for Paperless-ngx.
Text extraction is handled by the Tika server. PDF conversion for display
is handled by Gotenberg (LibreOffice route). Because the source formats
cannot be rendered by a browser natively, ``requires_pdf_rendition`` is
True and the PDF is always produced regardless of the ``produce_archive``
flag passed to ``parse``.
Both ``TikaClient`` and ``GotenbergClient`` are opened once in
``__enter__`` via an ``ExitStack`` and shared across ``parse``,
``extract_metadata``, and ``_convert_to_pdf`` calls, then closed via
``ExitStack.close()`` in ``__exit__``. The parser must always be used
as a context manager.
Class attributes
----------------
name : str
Human-readable parser name.
version : str
Semantic version string, kept in sync with Paperless-ngx releases.
author : str
Maintainer name.
url : str
Issue tracker / source URL.
"""
name: str = "Paperless-ngx Tika Parser"
version: str = __full_version_str__
author: str = "Paperless-ngx Contributors"
url: str = "https://github.com/paperless-ngx/paperless-ngx"
# ------------------------------------------------------------------
# Class methods
# ------------------------------------------------------------------
@classmethod
def supported_mime_types(cls) -> dict[str, str]:
"""Return the MIME types this parser handles.
Returns
-------
dict[str, str]
Mapping of MIME type to preferred file extension.
"""
return _SUPPORTED_MIME_TYPES
@classmethod
def score(
cls,
mime_type: str,
filename: str,
path: Path | None = None,
) -> int | None:
"""Return the priority score for handling this file.
Returns ``None`` when Tika integration is disabled so the registry
skips this parser entirely.
Parameters
----------
mime_type:
Detected MIME type of the file.
filename:
Original filename including extension.
path:
Optional filesystem path. Not inspected by this parser.
Returns
-------
int | None
10 if TIKA_ENABLED and the MIME type is supported, otherwise None.
"""
if not settings.TIKA_ENABLED:
return None
if mime_type in _SUPPORTED_MIME_TYPES:
return 10
return None
# ------------------------------------------------------------------
# Properties
# ------------------------------------------------------------------
@property
def can_produce_archive(self) -> bool:
"""Whether this parser can produce a searchable PDF archive copy.
Returns
-------
bool
Always False — Tika produces a display PDF, not an OCR archive.
"""
return False
@property
def requires_pdf_rendition(self) -> bool:
"""Whether the parser must produce a PDF for the frontend to display.
Returns
-------
bool
Always True — Office formats cannot be rendered natively in a
browser, so a PDF conversion is always required for display.
"""
return True
# ------------------------------------------------------------------
# Lifecycle
# ------------------------------------------------------------------
def __init__(self, logging_group: object = None) -> None:
settings.SCRATCH_DIR.mkdir(parents=True, exist_ok=True)
self._tempdir = Path(
tempfile.mkdtemp(prefix="paperless-", dir=settings.SCRATCH_DIR),
)
self._text: str | None = None
self._date: datetime.datetime | None = None
self._archive_path: Path | None = None
self._exit_stack = ExitStack()
self._tika_client: TikaClient | None = None
self._gotenberg_client: GotenbergClient | None = None
def __enter__(self) -> Self:
self._tika_client = self._exit_stack.enter_context(
TikaClient(
tika_url=settings.TIKA_ENDPOINT,
timeout=settings.CELERY_TASK_TIME_LIMIT,
),
)
self._gotenberg_client = self._exit_stack.enter_context(
GotenbergClient(
host=settings.TIKA_GOTENBERG_ENDPOINT,
timeout=settings.CELERY_TASK_TIME_LIMIT,
),
)
return self
def __exit__(
self,
exc_type: type[BaseException] | None,
exc_val: BaseException | None,
exc_tb: TracebackType | None,
) -> None:
self._exit_stack.close()
logger.debug("Cleaning up temporary directory %s", self._tempdir)
shutil.rmtree(self._tempdir, ignore_errors=True)
# ------------------------------------------------------------------
# Core parsing interface
# ------------------------------------------------------------------
def parse(
self,
document_path: Path,
mime_type: str,
*,
produce_archive: bool = True,
) -> None:
"""Send the document to Tika for text extraction and Gotenberg for PDF.
Because ``requires_pdf_rendition`` is True the PDF conversion is
always performed — the ``produce_archive`` flag is intentionally
ignored.
Parameters
----------
document_path:
Absolute path to the document file to parse.
mime_type:
Detected MIME type of the document.
produce_archive:
Accepted for protocol compatibility but ignored; the PDF rendition
is always produced since the source format cannot be displayed
natively in the browser.
Raises
------
documents.parsers.ParseError
If Tika or Gotenberg returns an error.
"""
if TYPE_CHECKING:
assert self._tika_client is not None
logger.info("Sending %s to Tika server", document_path)
try:
try:
parsed = self._tika_client.tika.as_text.from_file(
document_path,
mime_type,
)
except httpx.HTTPStatusError as err:
# Workaround https://issues.apache.org/jira/browse/TIKA-4110
# Tika fails with some files as multi-part form data
if err.response.status_code == httpx.codes.INTERNAL_SERVER_ERROR:
parsed = self._tika_client.tika.as_text.from_buffer(
document_path.read_bytes(),
mime_type,
)
else: # pragma: no cover
raise
except Exception as err:
raise ParseError(
f"Could not parse {document_path} with tika server at "
f"{settings.TIKA_ENDPOINT}: {err}",
) from err
self._text = parsed.content
if self._text is not None:
self._text = self._text.strip()
self._date = parsed.created
if self._date is not None and timezone.is_naive(self._date):
self._date = timezone.make_aware(self._date)
# Always convert — requires_pdf_rendition=True means the browser
# cannot display the source format natively.
self._archive_path = self._convert_to_pdf(document_path)
# ------------------------------------------------------------------
# Result accessors
# ------------------------------------------------------------------
def get_text(self) -> str | None:
"""Return the plain-text content extracted during parse.
Returns
-------
str | None
Extracted text, or None if parse has not been called yet.
"""
return self._text
def get_date(self) -> datetime.datetime | None:
"""Return the document date detected during parse.
Returns
-------
datetime.datetime | None
Creation date from Tika metadata, or None if not detected.
"""
return self._date
def get_archive_path(self) -> Path | None:
"""Return the path to the generated PDF rendition, or None.
Returns
-------
Path | None
Path to the PDF produced by Gotenberg, or None if parse has not
been called yet.
"""
return self._archive_path
# ------------------------------------------------------------------
# Thumbnail and metadata
# ------------------------------------------------------------------
def get_thumbnail(self, document_path: Path, mime_type: str) -> Path:
"""Generate a thumbnail from the PDF rendition of the document.
Converts the document to PDF first if not already done.
Parameters
----------
document_path:
Absolute path to the source document.
mime_type:
Detected MIME type of the document.
Returns
-------
Path
Path to the generated WebP thumbnail inside the temporary directory.
"""
if self._archive_path is None:
self._archive_path = self._convert_to_pdf(document_path)
return make_thumbnail_from_pdf(self._archive_path, self._tempdir)
def get_page_count(
self,
document_path: Path,
mime_type: str,
) -> int | None:
"""Return the number of pages in the document.
Returns
-------
int | None
Always None — page count is not available from Tika.
"""
return None
def extract_metadata(
self,
document_path: Path,
mime_type: str,
) -> list[MetadataEntry]:
"""Extract format-specific metadata via the Tika metadata endpoint.
Returns
-------
list[MetadataEntry]
All key/value pairs returned by Tika, or ``[]`` on error.
"""
if TYPE_CHECKING:
assert self._tika_client is not None
try:
parsed = self._tika_client.metadata.from_file(document_path, mime_type)
return [
{
"namespace": "",
"prefix": "",
"key": key,
"value": parsed.data[key],
}
for key in parsed.data
]
except Exception as e:
logger.warning(
"Error while fetching document metadata for %s: %s",
document_path,
e,
)
return []
# ------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------
def _convert_to_pdf(self, document_path: Path) -> Path:
"""Convert the document to PDF using Gotenberg's LibreOffice route.
Parameters
----------
document_path:
Absolute path to the source document.
Returns
-------
Path
Path to the generated PDF inside the temporary directory.
Raises
------
documents.parsers.ParseError
If Gotenberg returns an error.
"""
if TYPE_CHECKING:
assert self._gotenberg_client is not None
pdf_path = self._tempdir / "convert.pdf"
logger.info("Converting %s to PDF as %s", document_path, pdf_path)
with self._gotenberg_client.libre_office.to_pdf() as route:
# Set the output format of the resulting PDF.
# OutputTypeConfig reads the database-stored ApplicationConfiguration
# first, then falls back to the PAPERLESS_OCR_OUTPUT_TYPE env var.
output_type = OutputTypeConfig().output_type
if output_type in {
OutputTypeChoices.PDF_A,
OutputTypeChoices.PDF_A2,
}:
route.pdf_format(PdfAFormat.A2b)
elif output_type == OutputTypeChoices.PDF_A1:
logger.warning(
"Gotenberg does not support PDF/A-1a, choosing PDF/A-2b instead",
)
route.pdf_format(PdfAFormat.A2b)
elif output_type == OutputTypeChoices.PDF_A3:
route.pdf_format(PdfAFormat.A3b)
route.convert(document_path)
try:
response = route.run()
pdf_path.write_bytes(response.content)
return pdf_path
except Exception as err:
raise ParseError(
f"Error while converting document to PDF: {err}",
) from err
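An example lifecycle for the Tika parser (requires reachable Tika and Gotenberg servers configured via TIKA_ENDPOINT and TIKA_GOTENBERG_ENDPOINT; the path is illustrative):

from pathlib import Path

from paperless.parsers.tika import TikaDocumentParser

docx_mime = (
    "application/vnd.openxmlformats-officedocument.wordprocessingml.document"
)

with TikaDocumentParser() as parser:
    parser.parse(Path("/tmp/sample.docx"), docx_mime)
    print(parser.get_text())
    print(parser.get_archive_path())  # PDF rendition written by Gotenberg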


@@ -6,6 +6,7 @@ from allauth.mfa.models import Authenticator
from allauth.mfa.totp.internal.auth import TOTP
from allauth.socialaccount.models import SocialAccount
from allauth.socialaccount.models import SocialApp
from django.conf import settings
from django.contrib.auth.models import Group
from django.contrib.auth.models import Permission
from django.contrib.auth.models import User
@@ -15,6 +16,7 @@ from rest_framework import serializers
from rest_framework.authtoken.serializers import AuthTokenSerializer
from paperless.models import ApplicationConfiguration
from paperless.network import validate_outbound_http_url
from paperless.validators import reject_dangerous_svg
from paperless_mail.serialisers import ObfuscatedPasswordField
@@ -236,6 +238,20 @@ class ApplicationConfigurationSerializer(serializers.ModelSerializer):
reject_dangerous_svg(file)
return file
def validate_llm_endpoint(self, value: str | None) -> str | None:
if not value:
return value
try:
validate_outbound_http_url(
value,
allow_internal=settings.LLM_ALLOW_INTERNAL_ENDPOINTS,
)
except ValueError as e:
raise serializers.ValidationError(str(e)) from e
return value
class Meta:
model = ApplicationConfiguration
fields = "__all__"


@@ -1112,3 +1112,7 @@ LLM_BACKEND = os.getenv("PAPERLESS_AI_LLM_BACKEND") # "ollama" or "openai"
LLM_MODEL = os.getenv("PAPERLESS_AI_LLM_MODEL")
LLM_API_KEY = os.getenv("PAPERLESS_AI_LLM_API_KEY")
LLM_ENDPOINT = os.getenv("PAPERLESS_AI_LLM_ENDPOINT")
LLM_ALLOW_INTERNAL_ENDPOINTS = get_bool_from_env(
"PAPERLESS_AI_LLM_ALLOW_INTERNAL_ENDPOINTS",
"true",
)


@@ -209,7 +209,6 @@ def parse_db_settings(data_dir: Path) -> dict[str, dict[str, Any]]:
engine = get_choice_from_env(
"PAPERLESS_DBENGINE",
{"sqlite", "postgresql", "mariadb"},
default="sqlite",
)
except ValueError:
# MariaDB users already had to set PAPERLESS_DBENGINE, so it was picked up above


@@ -1,48 +0,0 @@
"""
Fixtures defined here are available to every test module under
src/paperless/tests/ (including sub-packages such as parsers/).
Session-scoped fixtures for the shared samples directory live here so
sub-package conftest files can reference them without duplicating path logic.
Parser-specific fixtures (concrete parser instances, format-specific sample
files) live in paperless/tests/parsers/conftest.py.
"""
from __future__ import annotations
from pathlib import Path
from typing import TYPE_CHECKING
import pytest
from paperless.parsers.registry import reset_parser_registry
if TYPE_CHECKING:
from collections.abc import Generator
@pytest.fixture(scope="session")
def samples_dir() -> Path:
"""Absolute path to the shared parser sample files directory.
Sub-package conftest files derive format-specific paths from this root,
e.g. ``samples_dir / "text" / "test.txt"``.
Returns
-------
Path
Directory containing all sample documents used by parser tests.
"""
return (Path(__file__).parent / "samples").resolve()
@pytest.fixture(autouse=True)
def clean_registry() -> Generator[None, None, None]:
"""Reset the parser registry before and after every test.
This prevents registry state from leaking between tests that call
get_parser_registry() or init_builtin_parsers().
"""
reset_parser_registry()
yield
reset_parser_registry()


@@ -1,160 +0,0 @@
"""
Parser fixtures that are used across multiple test modules in this package
are defined here. Format-specific sample-file fixtures are grouped by parser
so it is easy to see which files belong to which test module.
"""
from __future__ import annotations
from typing import TYPE_CHECKING
import pytest
from paperless.parsers.text import TextDocumentParser
from paperless.parsers.tika import TikaDocumentParser
if TYPE_CHECKING:
from collections.abc import Generator
from pathlib import Path
# ------------------------------------------------------------------
# Text parser sample files
# ------------------------------------------------------------------
@pytest.fixture(scope="session")
def text_samples_dir(samples_dir: Path) -> Path:
"""Absolute path to the text parser sample files directory.
Returns
-------
Path
``<samples_dir>/text/``
"""
return samples_dir / "text"
@pytest.fixture(scope="session")
def sample_txt_file(text_samples_dir: Path) -> Path:
"""Path to a valid UTF-8 plain-text sample file.
Returns
-------
Path
Absolute path to ``text/test.txt``.
"""
return text_samples_dir / "test.txt"
@pytest.fixture(scope="session")
def malformed_txt_file(text_samples_dir: Path) -> Path:
"""Path to a text file containing invalid UTF-8 bytes.
Returns
-------
Path
Absolute path to ``text/decode_error.txt``.
"""
return text_samples_dir / "decode_error.txt"
# ------------------------------------------------------------------
# Text parser instance
# ------------------------------------------------------------------
@pytest.fixture()
def text_parser() -> Generator[TextDocumentParser, None, None]:
"""Yield a TextDocumentParser and clean up its temporary directory afterwards.
Yields
------
TextDocumentParser
A ready-to-use parser instance.
"""
with TextDocumentParser() as parser:
yield parser
# ------------------------------------------------------------------
# Tika parser sample files
# ------------------------------------------------------------------
@pytest.fixture(scope="session")
def tika_samples_dir(samples_dir: Path) -> Path:
"""Absolute path to the Tika parser sample files directory.
Returns
-------
Path
``<samples_dir>/tika/``
"""
return samples_dir / "tika"
@pytest.fixture(scope="session")
def sample_odt_file(tika_samples_dir: Path) -> Path:
"""Path to a sample ODT file.
Returns
-------
Path
Absolute path to ``tika/sample.odt``.
"""
return tika_samples_dir / "sample.odt"
@pytest.fixture(scope="session")
def sample_docx_file(tika_samples_dir: Path) -> Path:
"""Path to a sample DOCX file.
Returns
-------
Path
Absolute path to ``tika/sample.docx``.
"""
return tika_samples_dir / "sample.docx"
@pytest.fixture(scope="session")
def sample_doc_file(tika_samples_dir: Path) -> Path:
"""Path to a sample DOC file.
Returns
-------
Path
Absolute path to ``tika/sample.doc``.
"""
return tika_samples_dir / "sample.doc"
@pytest.fixture(scope="session")
def sample_broken_odt(tika_samples_dir: Path) -> Path:
"""Path to a broken ODT file that triggers the multi-part fallback.
Returns
-------
Path
Absolute path to ``tika/multi-part-broken.odt``.
"""
return tika_samples_dir / "multi-part-broken.odt"
# ------------------------------------------------------------------
# Tika parser instance
# ------------------------------------------------------------------
@pytest.fixture()
def tika_parser() -> Generator[TikaDocumentParser, None, None]:
"""Yield a TikaDocumentParser and clean up its temporary directory afterwards.
Yields
------
TikaDocumentParser
A ready-to-use parser instance.
"""
with TikaDocumentParser() as parser:
yield parser

View File

@@ -1,256 +0,0 @@
"""
Tests for paperless.parsers.text.TextDocumentParser.
All tests use the context-manager protocol for parser lifecycle. Sample
files are provided by session-scoped fixtures defined in conftest.py.
"""
from __future__ import annotations
import tempfile
from pathlib import Path
import pytest
from paperless.parsers import ParserProtocol
from paperless.parsers.text import TextDocumentParser
class TestTextParserProtocol:
"""Verify that TextDocumentParser satisfies the ParserProtocol contract."""
def test_isinstance_satisfies_protocol(
self,
text_parser: TextDocumentParser,
) -> None:
assert isinstance(text_parser, ParserProtocol)
def test_class_attributes_present(self) -> None:
assert isinstance(TextDocumentParser.name, str) and TextDocumentParser.name
assert (
isinstance(TextDocumentParser.version, str) and TextDocumentParser.version
)
assert isinstance(TextDocumentParser.author, str) and TextDocumentParser.author
assert isinstance(TextDocumentParser.url, str) and TextDocumentParser.url
def test_supported_mime_types_returns_dict(self) -> None:
mime_types = TextDocumentParser.supported_mime_types()
assert isinstance(mime_types, dict)
assert "text/plain" in mime_types
assert "text/csv" in mime_types
assert "application/csv" in mime_types
@pytest.mark.parametrize(
("mime_type", "expected"),
[
("text/plain", 10),
("text/csv", 10),
("application/csv", 10),
("application/pdf", None),
("image/png", None),
],
)
def test_score(self, mime_type: str, expected: int | None) -> None:
assert TextDocumentParser.score(mime_type, "file.txt") == expected
def test_can_produce_archive_is_false(
self,
text_parser: TextDocumentParser,
) -> None:
assert text_parser.can_produce_archive is False
def test_requires_pdf_rendition_is_false(
self,
text_parser: TextDocumentParser,
) -> None:
assert text_parser.requires_pdf_rendition is False
class TestTextParserLifecycle:
"""Verify context-manager behaviour and temporary directory cleanup."""
def test_context_manager_cleans_up_tempdir(self) -> None:
with TextDocumentParser() as parser:
tempdir = parser._tempdir
assert tempdir.exists()
assert not tempdir.exists()
def test_context_manager_cleans_up_after_exception(self) -> None:
tempdir: Path | None = None
with pytest.raises(RuntimeError):
with TextDocumentParser() as parser:
tempdir = parser._tempdir
raise RuntimeError("boom")
assert tempdir is not None
assert not tempdir.exists()
class TestTextParserParse:
"""Verify parse() and the result accessors."""
def test_parse_valid_utf8(
self,
text_parser: TextDocumentParser,
sample_txt_file: Path,
) -> None:
text_parser.parse(sample_txt_file, "text/plain")
assert text_parser.get_text() == "This is a test file.\n"
def test_parse_returns_none_for_archive_path(
self,
text_parser: TextDocumentParser,
sample_txt_file: Path,
) -> None:
text_parser.parse(sample_txt_file, "text/plain")
assert text_parser.get_archive_path() is None
def test_parse_returns_none_for_date(
self,
text_parser: TextDocumentParser,
sample_txt_file: Path,
) -> None:
text_parser.parse(sample_txt_file, "text/plain")
assert text_parser.get_date() is None
def test_parse_invalid_utf8_bytes_replaced(
self,
text_parser: TextDocumentParser,
malformed_txt_file: Path,
) -> None:
"""
GIVEN:
- A text file containing invalid UTF-8 byte sequences
WHEN:
- The file is parsed
THEN:
- Parsing succeeds
- Invalid bytes are replaced with the Unicode replacement character
"""
text_parser.parse(malformed_txt_file, "text/plain")
assert text_parser.get_text() == "Pantothens\ufffdure\n"
def test_get_text_none_before_parse(
self,
text_parser: TextDocumentParser,
) -> None:
assert text_parser.get_text() is None
class TestTextParserThumbnail:
"""Verify thumbnail generation."""
def test_thumbnail_exists_and_is_file(
self,
text_parser: TextDocumentParser,
sample_txt_file: Path,
) -> None:
thumb = text_parser.get_thumbnail(sample_txt_file, "text/plain")
assert thumb.exists()
assert thumb.is_file()
def test_thumbnail_large_file_does_not_read_all(
self,
text_parser: TextDocumentParser,
) -> None:
"""
GIVEN:
- A text file larger than 50 MB
WHEN:
- A thumbnail is requested
THEN:
- The thumbnail is generated without loading the full file
"""
with tempfile.NamedTemporaryFile(
delete=False,
mode="w",
encoding="utf-8",
suffix=".txt",
) as tmp:
tmp.write("A" * (51 * 1024 * 1024))
large_file = Path(tmp.name)
try:
thumb = text_parser.get_thumbnail(large_file, "text/plain")
assert thumb.exists()
assert thumb.is_file()
finally:
large_file.unlink(missing_ok=True)
def test_get_page_count_returns_none(
self,
text_parser: TextDocumentParser,
sample_txt_file: Path,
) -> None:
assert text_parser.get_page_count(sample_txt_file, "text/plain") is None
class TestTextParserMetadata:
"""Verify extract_metadata behaviour."""
def test_extract_metadata_returns_empty_list(
self,
text_parser: TextDocumentParser,
sample_txt_file: Path,
) -> None:
result = text_parser.extract_metadata(sample_txt_file, "text/plain")
assert result == []
def test_extract_metadata_returns_list_type(
self,
text_parser: TextDocumentParser,
sample_txt_file: Path,
) -> None:
result = text_parser.extract_metadata(sample_txt_file, "text/plain")
assert isinstance(result, list)
def test_extract_metadata_ignores_mime_type(
self,
text_parser: TextDocumentParser,
sample_txt_file: Path,
) -> None:
"""extract_metadata returns [] regardless of the mime_type argument."""
assert text_parser.extract_metadata(sample_txt_file, "application/pdf") == []
assert text_parser.extract_metadata(sample_txt_file, "text/csv") == []
class TestTextParserRegistry:
"""Verify that TextDocumentParser is registered by default."""
def test_registered_in_defaults(self) -> None:
from paperless.parsers.registry import ParserRegistry
registry = ParserRegistry()
registry.register_defaults()
assert TextDocumentParser in registry._builtins
def test_get_parser_for_text_plain(self) -> None:
from paperless.parsers.registry import get_parser_registry
registry = get_parser_registry()
parser_cls = registry.get_parser_for_file("text/plain", "doc.txt")
assert parser_cls is TextDocumentParser
def test_get_parser_for_text_csv(self) -> None:
from paperless.parsers.registry import get_parser_registry
registry = get_parser_registry()
parser_cls = registry.get_parser_for_file("text/csv", "data.csv")
assert parser_cls is TextDocumentParser
def test_get_parser_for_unknown_type_returns_none(self) -> None:
from paperless.parsers.registry import get_parser_registry
registry = get_parser_registry()
parser_cls = registry.get_parser_for_file("application/pdf", "doc.pdf")
assert parser_cls is None

View File

@@ -581,11 +581,11 @@ class TestV3MinimumUpgradeVersionCheck:
) -> None:
"""
GIVEN:
- DB is on an old v2 version (pre-v2.20.9)
- DB is on an old v2 version (pre-v2.20.10)
WHEN:
- The v3 upgrade check runs
THEN:
- The error hint explicitly references v2.20.9 so users know what to do
- The error hint explicitly references v2.20.10 so users know what to do
"""
mocker.patch.dict(
"paperless.checks.connections",
@@ -593,7 +593,7 @@ class TestV3MinimumUpgradeVersionCheck:
)
result = check_v3_minimum_upgrade_version(None)
assert len(result) == 1
assert "v2.20.9" in result[0].hint
assert "v2.20.10" in result[0].hint
def test_db_error_is_swallowed(self, mocker: MockerFixture) -> None:
"""

View File

@@ -1,714 +0,0 @@
"""
Tests for :mod:`paperless.parsers` (ParserProtocol) and
:mod:`paperless.parsers.registry` (ParserRegistry + module-level helpers).
All tests use pytest-style functions/classes — no unittest.TestCase.
The ``clean_registry`` fixture ensures complete isolation between tests by
resetting the module-level singleton before and after every test.
"""
from __future__ import annotations
import logging
from importlib.metadata import EntryPoint
from pathlib import Path
from typing import Self
from unittest.mock import MagicMock
from unittest.mock import patch
import pytest
from paperless.parsers import ParserProtocol
from paperless.parsers.registry import ParserRegistry
from paperless.parsers.registry import get_parser_registry
from paperless.parsers.registry import init_builtin_parsers
from paperless.parsers.registry import reset_parser_registry
@pytest.fixture()
def dummy_parser_cls() -> type:
"""Return a class that fully satisfies :class:`ParserProtocol`.
GIVEN: A need to exercise registry and Protocol logic with a minimal
but complete parser.
WHEN: A test requests this fixture.
THEN: A class with all required attributes and methods is returned.
"""
class DummyParser:
name = "dummy-parser"
version = "0.1.0"
author = "Test Author"
url = "https://example.com/dummy-parser"
@classmethod
def supported_mime_types(cls) -> dict[str, str]:
return {"text/plain": ".txt"}
@classmethod
def score(
cls,
mime_type: str,
filename: str,
path: Path | None = None,
) -> int | None:
return 10
@property
def can_produce_archive(self) -> bool:
return False
@property
def requires_pdf_rendition(self) -> bool:
return False
def parse(
self,
document_path: Path,
mime_type: str,
*,
produce_archive: bool = True,
) -> None:
"""
Required to exist, but doesn't need to do anything
"""
def get_text(self) -> str | None:
return None
def get_date(self) -> None:
return None
def get_archive_path(self) -> Path | None:
return None
def get_thumbnail(
self,
document_path: Path,
mime_type: str,
) -> Path:
return Path("/tmp/thumbnail.webp")
def get_page_count(
self,
document_path: Path,
mime_type: str,
) -> int | None:
return None
def extract_metadata(
self,
document_path: Path,
mime_type: str,
) -> list:
return []
def __enter__(self) -> Self:
return self
def __exit__(self, exc_type, exc_val, exc_tb) -> None:
"""
Required to exist, but doesn't need to do anything
"""
return DummyParser
class TestParserProtocol:
"""Verify runtime isinstance() checks against ParserProtocol."""
def test_compliant_class_instance_passes_isinstance(
self,
dummy_parser_cls: type,
) -> None:
"""
GIVEN: A class that implements every method required by ParserProtocol.
WHEN: isinstance() is called with the Protocol.
THEN: The check passes (returns True).
"""
instance = dummy_parser_cls()
assert isinstance(instance, ParserProtocol)
def test_non_compliant_class_instance_fails_isinstance(self) -> None:
"""
GIVEN: A plain class with no parser-related methods.
WHEN: isinstance() is called with ParserProtocol.
THEN: The check fails (returns False).
"""
class Unrelated:
pass
assert not isinstance(Unrelated(), ParserProtocol)
@pytest.mark.parametrize(
"missing_method",
[
pytest.param("parse", id="missing-parse"),
pytest.param("get_text", id="missing-get_text"),
pytest.param("get_thumbnail", id="missing-get_thumbnail"),
pytest.param("__enter__", id="missing-__enter__"),
pytest.param("__exit__", id="missing-__exit__"),
],
)
def test_partial_compliant_fails_isinstance(
self,
dummy_parser_cls: type,
missing_method: str,
) -> None:
"""
GIVEN: A class that satisfies ParserProtocol except for one method.
WHEN: isinstance() is called with ParserProtocol.
THEN: The check fails because the Protocol is not fully satisfied.
"""
# Create a subclass and delete the specified method to break compliance.
partial_cls = type(
"PartialParser",
(dummy_parser_cls,),
{missing_method: None}, # Replace with None — not callable
)
assert not isinstance(partial_cls(), ParserProtocol)
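These isinstance() checks only work if ParserProtocol is decorated with typing.runtime_checkable, and they verify member presence (plus, on Python 3.12 and later, that method members are callable), never signatures. A minimal sketch of such a declaration, trimmed to a few members and assuming the real definition in paperless.parsers follows this pattern:

from __future__ import annotations

from pathlib import Path
from typing import Protocol
from typing import runtime_checkable


@runtime_checkable
class ParserProtocol(Protocol):
    def parse(self, document_path: Path, mime_type: str, *, produce_archive: bool = True) -> None: ...

    def get_text(self) -> str | None: ...

    def get_thumbnail(self, document_path: Path, mime_type: str) -> Path: ...

    def __enter__(self): ...

    def __exit__(self, exc_type, exc_val, exc_tb) -> None: ...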
class TestRegistrySingleton:
"""Verify the module-level singleton lifecycle functions."""
def test_get_parser_registry_returns_instance(self) -> None:
"""
GIVEN: No registry has been created yet.
WHEN: get_parser_registry() is called.
THEN: A ParserRegistry instance is returned.
"""
registry = get_parser_registry()
assert isinstance(registry, ParserRegistry)
def test_get_parser_registry_same_instance_on_repeated_calls(self) -> None:
"""
GIVEN: A registry instance was created by a prior call.
WHEN: get_parser_registry() is called a second time.
THEN: The exact same object (identity) is returned.
"""
first = get_parser_registry()
second = get_parser_registry()
assert first is second
def test_reset_parser_registry_gives_fresh_instance(self) -> None:
"""
GIVEN: A registry instance already exists.
WHEN: reset_parser_registry() is called and then get_parser_registry()
is called again.
THEN: A new, distinct registry instance is returned.
"""
first = get_parser_registry()
reset_parser_registry()
second = get_parser_registry()
assert first is not second
def test_init_builtin_parsers_does_not_run_discover(
self,
monkeypatch: pytest.MonkeyPatch,
) -> None:
"""
GIVEN: discover() would raise an exception if called.
WHEN: init_builtin_parsers() is called.
THEN: No exception is raised, confirming discover() was not invoked.
"""
def exploding_discover(self) -> None:
raise RuntimeError(
"discover() must not be called from init_builtin_parsers",
)
monkeypatch.setattr(ParserRegistry, "discover", exploding_discover)
# Should complete without raising.
init_builtin_parsers()
def test_init_builtin_parsers_idempotent(self) -> None:
"""
GIVEN: init_builtin_parsers() has already been called once.
WHEN: init_builtin_parsers() is called a second time.
THEN: No error is raised and the same registry instance is reused.
"""
init_builtin_parsers()
# Capture the registry created by the first call.
import paperless.parsers.registry as reg_module
first_registry = reg_module._registry
init_builtin_parsers()
assert reg_module._registry is first_registry
class TestParserRegistryGetParserForFile:
"""Verify parser selection logic in get_parser_for_file()."""
def test_returns_none_when_no_parsers_registered(self) -> None:
"""
GIVEN: A registry with no parsers registered.
WHEN: get_parser_for_file() is called for any MIME type.
THEN: None is returned.
"""
registry = ParserRegistry()
result = registry.get_parser_for_file("text/plain", "doc.txt")
assert result is None
def test_returns_none_for_unsupported_mime_type(
self,
dummy_parser_cls: type,
) -> None:
"""
GIVEN: A registry with a parser that supports only 'text/plain'.
WHEN: get_parser_for_file() is called with 'application/pdf'.
THEN: None is returned.
"""
registry = ParserRegistry()
registry.register_builtin(dummy_parser_cls)
result = registry.get_parser_for_file("application/pdf", "file.pdf")
assert result is None
def test_returns_parser_for_supported_mime_type(
self,
dummy_parser_cls: type,
) -> None:
"""
GIVEN: A registry with a parser registered for 'text/plain'.
WHEN: get_parser_for_file() is called with 'text/plain'.
THEN: The registered parser class is returned.
"""
registry = ParserRegistry()
registry.register_builtin(dummy_parser_cls)
result = registry.get_parser_for_file("text/plain", "readme.txt")
assert result is dummy_parser_cls
def test_highest_score_wins(self) -> None:
"""
GIVEN: Two parsers both supporting 'text/plain' with scores 5 and 20.
WHEN: get_parser_for_file() is called for 'text/plain'.
THEN: The parser with score 20 is returned.
"""
class LowScoreParser:
name = "low"
version = "1.0"
author = "A"
url = "https://example.com/low"
@classmethod
def supported_mime_types(cls):
return {"text/plain": ".txt"}
@classmethod
def score(cls, mime_type, filename, path=None):
return 5
class HighScoreParser:
name = "high"
version = "1.0"
author = "B"
url = "https://example.com/high"
@classmethod
def supported_mime_types(cls):
return {"text/plain": ".txt"}
@classmethod
def score(cls, mime_type, filename, path=None):
return 20
registry = ParserRegistry()
registry.register_builtin(LowScoreParser)
registry.register_builtin(HighScoreParser)
result = registry.get_parser_for_file("text/plain", "readme.txt")
assert result is HighScoreParser
def test_parser_returning_none_score_is_skipped(self) -> None:
"""
GIVEN: A parser that returns None from score() for the given file.
WHEN: get_parser_for_file() is called.
THEN: That parser is skipped and None is returned (no other candidates).
"""
class DecliningParser:
name = "declining"
version = "1.0"
author = "A"
url = "https://example.com"
@classmethod
def supported_mime_types(cls):
return {"text/plain": ".txt"}
@classmethod
def score(cls, mime_type, filename, path=None):
return None # Explicitly declines
registry = ParserRegistry()
registry.register_builtin(DecliningParser)
result = registry.get_parser_for_file("text/plain", "readme.txt")
assert result is None
def test_all_parsers_decline_returns_none(self) -> None:
"""
GIVEN: Multiple parsers that all return None from score().
WHEN: get_parser_for_file() is called.
THEN: None is returned.
"""
class AlwaysDeclines:
name = "declines"
version = "1.0"
author = "A"
url = "https://example.com"
@classmethod
def supported_mime_types(cls):
return {"text/plain": ".txt"}
@classmethod
def score(cls, mime_type, filename, path=None):
return None
registry = ParserRegistry()
registry.register_builtin(AlwaysDeclines)
registry._external.append(AlwaysDeclines)
result = registry.get_parser_for_file("text/plain", "file.txt")
assert result is None
def test_external_parser_beats_builtin_same_score(self) -> None:
"""
GIVEN: An external and a built-in parser both returning score 10.
WHEN: get_parser_for_file() is called.
THEN: The external parser wins because externals are evaluated first
and the first-seen-wins policy applies at equal scores.
"""
class BuiltinParser:
name = "builtin"
version = "1.0"
author = "Core"
url = "https://example.com/builtin"
@classmethod
def supported_mime_types(cls):
return {"text/plain": ".txt"}
@classmethod
def score(cls, mime_type, filename, path=None):
return 10
class ExternalParser:
name = "external"
version = "2.0"
author = "Third Party"
url = "https://example.com/external"
@classmethod
def supported_mime_types(cls):
return {"text/plain": ".txt"}
@classmethod
def score(cls, mime_type, filename, path=None):
return 10
registry = ParserRegistry()
registry.register_builtin(BuiltinParser)
registry._external.append(ExternalParser)
result = registry.get_parser_for_file("text/plain", "file.txt")
assert result is ExternalParser
def test_builtin_wins_when_external_declines(self) -> None:
"""
GIVEN: An external parser that declines (score None) and a built-in
that returns score 5.
WHEN: get_parser_for_file() is called.
THEN: The built-in parser is returned.
"""
class DecliningExternal:
name = "declining-external"
version = "1.0"
author = "Third Party"
url = "https://example.com/declining"
@classmethod
def supported_mime_types(cls):
return {"text/plain": ".txt"}
@classmethod
def score(cls, mime_type, filename, path=None):
return None
class AcceptingBuiltin:
name = "accepting-builtin"
version = "1.0"
author = "Core"
url = "https://example.com/accepting"
@classmethod
def supported_mime_types(cls):
return {"text/plain": ".txt"}
@classmethod
def score(cls, mime_type, filename, path=None):
return 5
registry = ParserRegistry()
registry.register_builtin(AcceptingBuiltin)
registry._external.append(DecliningExternal)
result = registry.get_parser_for_file("text/plain", "file.txt")
assert result is AcceptingBuiltin
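The selection rules pinned down by these tests (externals checked before built-ins, highest score wins, first seen wins ties, None means decline) reduce to a single loop. A sketch of how get_parser_for_file() could implement them; the actual method may differ:

def get_parser_for_file(self, mime_type: str, filename: str):
    best = None
    best_score = None
    # Externals are iterated first, so at equal scores an external parser wins.
    for parser_cls in [*self._external, *self._builtins]:
        if mime_type not in parser_cls.supported_mime_types():
            continue
        score = parser_cls.score(mime_type, filename)
        if score is None:  # The parser declined this file.
            continue
        if best_score is None or score > best_score:
            best, best_score = parser_cls, score
    return best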
class TestDiscover:
"""Verify entrypoint discovery in ParserRegistry.discover()."""
def test_discover_with_no_entrypoints(self) -> None:
"""
GIVEN: No entrypoints are registered under 'paperless_ngx.parsers'.
WHEN: discover() is called.
THEN: _external remains empty and no errors are raised.
"""
registry = ParserRegistry()
with patch(
"paperless.parsers.registry.entry_points",
return_value=[],
):
registry.discover()
assert registry._external == []
def test_discover_adds_valid_external_parser(self) -> None:
"""
GIVEN: One valid entrypoint whose loaded class has all required attrs.
WHEN: discover() is called.
THEN: The class is appended to _external.
"""
class ValidExternal:
name = "valid-external"
version = "3.0.0"
author = "Someone"
url = "https://example.com/valid"
@classmethod
def supported_mime_types(cls):
return {"application/pdf": ".pdf"}
@classmethod
def score(cls, mime_type, filename, path=None):
return 5
mock_ep = MagicMock(spec=EntryPoint)
mock_ep.name = "valid_external"
mock_ep.load.return_value = ValidExternal
registry = ParserRegistry()
with patch(
"paperless.parsers.registry.entry_points",
return_value=[mock_ep],
):
registry.discover()
assert ValidExternal in registry._external
def test_discover_skips_entrypoint_with_load_error(
self,
caplog: pytest.LogCaptureFixture,
) -> None:
"""
GIVEN: An entrypoint whose load() method raises ImportError.
WHEN: discover() is called.
THEN: The entrypoint is skipped, an error is logged, and _external
remains empty.
"""
mock_ep = MagicMock(spec=EntryPoint)
mock_ep.name = "broken_ep"
mock_ep.load.side_effect = ImportError("missing dependency")
registry = ParserRegistry()
with caplog.at_level(logging.ERROR, logger="paperless.parsers.registry"):
with patch(
"paperless.parsers.registry.entry_points",
return_value=[mock_ep],
):
registry.discover()
assert registry._external == []
assert any(
"broken_ep" in record.message
for record in caplog.records
if record.levelno >= logging.ERROR
)
def test_discover_skips_entrypoint_with_missing_attrs(
self,
caplog: pytest.LogCaptureFixture,
) -> None:
"""
GIVEN: A class loaded from an entrypoint that is missing the 'score'
attribute.
WHEN: discover() is called.
THEN: The entrypoint is skipped, a warning is logged, and _external
remains empty.
"""
class MissingScore:
name = "missing-score"
version = "1.0"
author = "Someone"
url = "https://example.com"
# 'score' classmethod is intentionally absent.
@classmethod
def supported_mime_types(cls):
return {"text/plain": ".txt"}
mock_ep = MagicMock(spec=EntryPoint)
mock_ep.name = "missing_score_ep"
mock_ep.load.return_value = MissingScore
registry = ParserRegistry()
with caplog.at_level(logging.WARNING, logger="paperless.parsers.registry"):
with patch(
"paperless.parsers.registry.entry_points",
return_value=[mock_ep],
):
registry.discover()
assert registry._external == []
assert any(
"missing_score_ep" in record.message
for record in caplog.records
if record.levelno >= logging.WARNING
)
def test_discover_logs_loaded_parser_info(
self,
caplog: pytest.LogCaptureFixture,
) -> None:
"""
GIVEN: A valid entrypoint that loads successfully.
WHEN: discover() is called.
THEN: An INFO log message is emitted containing the parser name,
version, author, and entrypoint name.
"""
class LoggableParser:
name = "loggable"
version = "4.2.0"
author = "Log Tester"
url = "https://example.com/loggable"
@classmethod
def supported_mime_types(cls):
return {"image/png": ".png"}
@classmethod
def score(cls, mime_type, filename, path=None):
return 1
mock_ep = MagicMock(spec=EntryPoint)
mock_ep.name = "loggable_ep"
mock_ep.load.return_value = LoggableParser
registry = ParserRegistry()
with caplog.at_level(logging.INFO, logger="paperless.parsers.registry"):
with patch(
"paperless.parsers.registry.entry_points",
return_value=[mock_ep],
):
registry.discover()
info_messages = " ".join(
r.message for r in caplog.records if r.levelno == logging.INFO
)
assert "loggable" in info_messages
assert "4.2.0" in info_messages
assert "Log Tester" in info_messages
assert "loggable_ep" in info_messages
class TestLogSummary:
"""Verify log output from ParserRegistry.log_summary()."""
def test_log_summary_with_no_external_parsers(
self,
dummy_parser_cls: type,
caplog: pytest.LogCaptureFixture,
) -> None:
"""
GIVEN: A registry with one built-in parser and no external parsers.
WHEN: log_summary() is called.
THEN: The built-in parser name appears in the logs.
"""
registry = ParserRegistry()
registry.register_builtin(dummy_parser_cls)
with caplog.at_level(logging.INFO, logger="paperless.parsers.registry"):
registry.log_summary()
all_messages = " ".join(r.message for r in caplog.records)
assert dummy_parser_cls.name in all_messages
def test_log_summary_with_external_parsers(
self,
caplog: pytest.LogCaptureFixture,
) -> None:
"""
GIVEN: A registry with one external parser registered.
WHEN: log_summary() is called.
THEN: The external parser name, version, author, and url appear in
the log output.
"""
class ExtParser:
name = "ext-parser"
version = "9.9.9"
author = "Ext Corp"
url = "https://ext.example.com"
@classmethod
def supported_mime_types(cls):
return {}
@classmethod
def score(cls, mime_type, filename, path=None):
return None
registry = ParserRegistry()
registry._external.append(ExtParser)
with caplog.at_level(logging.INFO, logger="paperless.parsers.registry"):
registry.log_summary()
all_messages = " ".join(r.message for r in caplog.records)
assert "ext-parser" in all_messages
assert "9.9.9" in all_messages
assert "Ext Corp" in all_messages
assert "https://ext.example.com" in all_messages
def test_log_summary_logs_no_third_party_message_when_none(
self,
caplog: pytest.LogCaptureFixture,
) -> None:
"""
GIVEN: A registry with no external parsers.
WHEN: log_summary() is called.
THEN: A message containing 'No third-party parsers discovered.' is
logged.
"""
registry = ParserRegistry()
with caplog.at_level(logging.INFO, logger="paperless.parsers.registry"):
registry.log_summary()
all_messages = " ".join(r.message for r in caplog.records)
assert "No third-party parsers discovered." in all_messages

View File

@@ -21,12 +21,18 @@ from documents.views import BulkEditView
from documents.views import ChatStreamingView
from documents.views import CorrespondentViewSet
from documents.views import CustomFieldViewSet
from documents.views import DeleteDocumentsView
from documents.views import DocumentTypeViewSet
from documents.views import EditPdfDocumentsView
from documents.views import GlobalSearchView
from documents.views import IndexView
from documents.views import LogViewSet
from documents.views import MergeDocumentsView
from documents.views import PostDocumentView
from documents.views import RemoteVersionView
from documents.views import RemovePasswordDocumentsView
from documents.views import ReprocessDocumentsView
from documents.views import RotateDocumentsView
from documents.views import SavedViewViewSet
from documents.views import SearchAutoCompleteView
from documents.views import SelectionDataView
@@ -132,6 +138,36 @@ urlpatterns = [
BulkEditView.as_view(),
name="bulk_edit",
),
re_path(
"^delete/",
DeleteDocumentsView.as_view(),
name="delete_documents",
),
re_path(
"^reprocess/",
ReprocessDocumentsView.as_view(),
name="reprocess_documents",
),
re_path(
"^rotate/",
RotateDocumentsView.as_view(),
name="rotate_documents",
),
re_path(
"^merge/",
MergeDocumentsView.as_view(),
name="merge_documents",
),
re_path(
"^edit_pdf/",
EditPdfDocumentsView.as_view(),
name="edit_pdf_documents",
),
re_path(
"^remove_password/",
RemovePasswordDocumentsView.as_view(),
name="remove_password_documents",
),
re_path(
"^bulk_download/",
BulkDownloadView.as_view(),

View File

@@ -7,6 +7,7 @@ if TYPE_CHECKING:
from llama_index.llms.openai import OpenAI
from paperless.config import AIConfig
from paperless.network import validate_outbound_http_url
from paperless_ai.base_model import DocumentClassifierSchema
logger = logging.getLogger("paperless_ai.client")
@@ -25,17 +26,28 @@ class AIClient:
if self.settings.llm_backend == "ollama":
from llama_index.llms.ollama import Ollama
endpoint = self.settings.llm_endpoint or "http://localhost:11434"
validate_outbound_http_url(
endpoint,
allow_internal=self.settings.llm_allow_internal_endpoints,
)
return Ollama(
model=self.settings.llm_model or "llama3.1",
base_url=self.settings.llm_endpoint or "http://localhost:11434",
base_url=endpoint,
request_timeout=120,
)
elif self.settings.llm_backend == "openai":
from llama_index.llms.openai import OpenAI
endpoint = self.settings.llm_endpoint or None
if endpoint:
validate_outbound_http_url(
endpoint,
allow_internal=self.settings.llm_allow_internal_endpoints,
)
return OpenAI(
model=self.settings.llm_model or "gpt-3.5-turbo",
api_base=self.settings.llm_endpoint or None,
api_base=endpoint,
api_key=self.settings.llm_api_key,
)
else:

View File
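Taken together with the serializer change above, both LLM backends now run the same guard before any client is constructed. A quick illustration of the failure mode, assuming the helper raises ValueError as the tests below expect:

from paperless.network import validate_outbound_http_url

try:
    validate_outbound_http_url("http://127.0.0.1:11434", allow_internal=False)
except ValueError as exc:
    print(exc)  # Message references a non-public address.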

@@ -12,6 +12,7 @@ from documents.models import Document
from documents.models import Note
from paperless.config import AIConfig
from paperless.models import LLMEmbeddingBackend
from paperless.network import validate_outbound_http_url
def get_embedding_model() -> "BaseEmbedding":
@@ -21,10 +22,16 @@ def get_embedding_model() -> "BaseEmbedding":
case LLMEmbeddingBackend.OPENAI:
from llama_index.embeddings.openai import OpenAIEmbedding
endpoint = config.llm_endpoint or None
if endpoint:
validate_outbound_http_url(
endpoint,
allow_internal=config.llm_allow_internal_endpoints,
)
return OpenAIEmbedding(
model=config.llm_embedding_model or "text-embedding-3-small",
api_key=config.llm_api_key,
api_base=config.llm_endpoint or None,
api_base=endpoint,
)
case LLMEmbeddingBackend.HUGGINGFACE:
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

View File

@@ -12,6 +12,7 @@ from paperless_ai.client import AIClient
def mock_ai_config():
with patch("paperless_ai.client.AIConfig") as MockAIConfig:
mock_config = MagicMock()
mock_config.llm_allow_internal_endpoints = True
MockAIConfig.return_value = mock_config
yield mock_config
@@ -59,6 +60,17 @@ def test_get_llm_openai(mock_ai_config, mock_openai_llm):
assert client.llm == mock_openai_llm.return_value
def test_get_llm_openai_blocks_internal_endpoint_when_disallowed(mock_ai_config):
mock_ai_config.llm_backend = "openai"
mock_ai_config.llm_model = "test_model"
mock_ai_config.llm_api_key = "test_api_key"
mock_ai_config.llm_endpoint = "http://127.0.0.1:1234"
mock_ai_config.llm_allow_internal_endpoints = False
with pytest.raises(ValueError, match="non-public address"):
AIClient()
def test_get_llm_unsupported_backend(mock_ai_config):
mock_ai_config.llm_backend = "unsupported"

View File

@@ -15,6 +15,7 @@ from paperless_ai.embedding import get_embedding_model
@pytest.fixture
def mock_ai_config():
with patch("paperless_ai.embedding.AIConfig") as MockAIConfig:
MockAIConfig.return_value.llm_allow_internal_endpoints = True
yield MockAIConfig
@@ -77,6 +78,19 @@ def test_get_embedding_model_openai(mock_ai_config):
assert model == MockOpenAIEmbedding.return_value
def test_get_embedding_model_openai_blocks_internal_endpoint_when_disallowed(
mock_ai_config,
):
mock_ai_config.return_value.llm_embedding_backend = LLMEmbeddingBackend.OPENAI
mock_ai_config.return_value.llm_embedding_model = "text-embedding-3-small"
mock_ai_config.return_value.llm_api_key = "test_api_key"
mock_ai_config.return_value.llm_endpoint = "http://127.0.0.1:11434"
mock_ai_config.return_value.llm_allow_internal_endpoints = False
with pytest.raises(ValueError, match="non-public address"):
get_embedding_model()
def test_get_embedding_model_huggingface(mock_ai_config):
mock_ai_config.return_value.llm_embedding_backend = LLMEmbeddingBackend.HUGGINGFACE
mock_ai_config.return_value.llm_embedding_model = (

View File

@@ -0,0 +1,50 @@
from pathlib import Path
from django.conf import settings
from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont
from documents.parsers import DocumentParser
class TextDocumentParser(DocumentParser):
"""
This parser directly parses a text document (.txt, .md, or .csv)
"""
logging_name = "paperless.parsing.text"
def get_thumbnail(self, document_path: Path, mime_type, file_name=None) -> Path:
# Avoid reading entire file into memory
max_chars = 100_000
file_size_limit = 50 * 1024 * 1024
if document_path.stat().st_size > file_size_limit:
text = "[File too large to preview]"
else:
with Path(document_path).open("r", encoding="utf-8", errors="replace") as f:
text = f.read(max_chars)
img = Image.new("RGB", (500, 700), color="white")
draw = ImageDraw.Draw(img)
font = ImageFont.truetype(
font=settings.THUMBNAIL_FONT_NAME,
size=20,
layout_engine=ImageFont.Layout.BASIC,
)
draw.multiline_text((5, 5), text, font=font, fill="black", spacing=4)
out_path = self.tempdir / "thumb.webp"
img.save(out_path, format="WEBP")
return out_path
def parse(self, document_path, mime_type, file_name=None) -> None:
self.text = self.read_file_handle_unicode_errors(document_path)
def get_settings(self) -> None:
"""
This parser does not implement additional settings yet
"""
return None
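A minimal usage sketch for this parser; the file path is hypothetical, while logging_group and cleanup() come from the DocumentParser base class:

from pathlib import Path

from paperless_text.parsers import TextDocumentParser

parser = TextDocumentParser(logging_group=None)
try:
    parser.parse(Path("note.txt"), "text/plain")
    print(parser.get_text())
    thumb = parser.get_thumbnail(Path("note.txt"), "text/plain")  # WEBP written under parser.tempdir
finally:
    parser.cleanup()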

View File

@@ -1,13 +1,7 @@
def get_parser(*args, **kwargs):
from paperless.parsers.text import TextDocumentParser
from paperless_text.parsers import TextDocumentParser
# The new TextDocumentParser does not accept the legacy logging_group /
# progress_callback kwargs injected by the old signal-based consumer.
# These are dropped here; Phase 4 will replace this signal path with the
# new ParserRegistry so the shim can be removed at that point.
kwargs.pop("logging_group", None)
kwargs.pop("progress_callback", None)
return TextDocumentParser()
return TextDocumentParser(*args, **kwargs)
def text_consumer_declaration(sender, **kwargs):

View File

@@ -0,0 +1,30 @@
from collections.abc import Generator
from pathlib import Path
import pytest
from paperless_text.parsers import TextDocumentParser
@pytest.fixture(scope="session")
def sample_dir() -> Path:
return (Path(__file__).parent / Path("samples")).resolve()
@pytest.fixture()
def text_parser() -> Generator[TextDocumentParser, None, None]:
try:
parser = TextDocumentParser(logging_group=None)
yield parser
finally:
parser.cleanup()
@pytest.fixture(scope="session")
def sample_txt_file(sample_dir: Path) -> Path:
return sample_dir / "test.txt"
@pytest.fixture(scope="session")
def malformed_txt_file(sample_dir: Path) -> Path:
return sample_dir / "decode_error.txt"

View File

@@ -0,0 +1,69 @@
import tempfile
from pathlib import Path
from paperless_text.parsers import TextDocumentParser
class TestTextParser:
def test_thumbnail(
self,
text_parser: TextDocumentParser,
sample_txt_file: Path,
) -> None:
# just make sure that it does not crash
f = text_parser.get_thumbnail(sample_txt_file, "text/plain")
assert f.exists()
assert f.is_file()
def test_parse(
self,
text_parser: TextDocumentParser,
sample_txt_file: Path,
) -> None:
text_parser.parse(sample_txt_file, "text/plain")
assert text_parser.get_text() == "This is a test file.\n"
assert text_parser.get_archive_path() is None
def test_parse_invalid_bytes(
self,
text_parser: TextDocumentParser,
malformed_txt_file: Path,
) -> None:
"""
GIVEN:
- Text file which contains invalid UTF bytes
WHEN:
- The file is parsed
THEN:
- Parsing continues
- Invalid bytes are removed
"""
text_parser.parse(malformed_txt_file, "text/plain")
assert text_parser.get_text() == "Pantothens<EFBFBD>ure\n"
assert text_parser.get_archive_path() is None
def test_thumbnail_large_file(self, text_parser: TextDocumentParser) -> None:
"""
GIVEN:
- A very large text file (>50MB)
WHEN:
- A thumbnail is requested
THEN:
- A thumbnail is created without reading the entire file into memory
"""
with tempfile.NamedTemporaryFile(
delete=False,
mode="w",
encoding="utf-8",
suffix=".txt",
) as tmp:
tmp.write("A" * (51 * 1024 * 1024)) # 51 MB of 'A'
large_file = Path(tmp.name)
thumb = text_parser.get_thumbnail(large_file, "text/plain")
assert thumb.exists()
assert thumb.is_file()
large_file.unlink()

View File

@@ -0,0 +1,136 @@
from pathlib import Path
import httpx
from django.conf import settings
from django.utils import timezone
from gotenberg_client import GotenbergClient
from gotenberg_client.options import PdfAFormat
from tika_client import TikaClient
from documents.parsers import DocumentParser
from documents.parsers import ParseError
from documents.parsers import make_thumbnail_from_pdf
from paperless.config import OutputTypeConfig
from paperless.models import OutputTypeChoices
class TikaDocumentParser(DocumentParser):
"""
This parser sends documents to a local tika server
"""
logging_name = "paperless.parsing.tika"
def get_thumbnail(self, document_path, mime_type, file_name=None):
if not self.archive_path:
self.archive_path = self.convert_to_pdf(document_path, file_name)
return make_thumbnail_from_pdf(
self.archive_path,
self.tempdir,
self.logging_group,
)
def extract_metadata(self, document_path, mime_type):
try:
with TikaClient(
tika_url=settings.TIKA_ENDPOINT,
timeout=settings.CELERY_TASK_TIME_LIMIT,
) as client:
parsed = client.metadata.from_file(document_path, mime_type)
return [
{
"namespace": "",
"prefix": "",
"key": key,
"value": parsed.data[key],
}
for key in parsed.data
]
except Exception as e:
self.log.warning(
f"Error while fetching document metadata for {document_path}: {e}",
)
return []
def parse(self, document_path: Path, mime_type: str, file_name=None) -> None:
self.log.info(f"Sending {document_path} to Tika server")
try:
with TikaClient(
tika_url=settings.TIKA_ENDPOINT,
timeout=settings.CELERY_TASK_TIME_LIMIT,
) as client:
try:
parsed = client.tika.as_text.from_file(document_path, mime_type)
except httpx.HTTPStatusError as err:
# Workaround https://issues.apache.org/jira/browse/TIKA-4110
# Tika fails with some files as multi-part form data
if err.response.status_code == httpx.codes.INTERNAL_SERVER_ERROR:
parsed = client.tika.as_text.from_buffer(
document_path.read_bytes(),
mime_type,
)
else: # pragma: no cover
raise
except Exception as err:
raise ParseError(
f"Could not parse {document_path} with tika server at "
f"{settings.TIKA_ENDPOINT}: {err}",
) from err
self.text = parsed.content
if self.text is not None:
self.text = self.text.strip()
self.date = parsed.created
if self.date is not None and timezone.is_naive(self.date):
self.date = timezone.make_aware(self.date)
self.archive_path = self.convert_to_pdf(document_path, file_name)
def convert_to_pdf(self, document_path: Path, file_name):
pdf_path = Path(self.tempdir) / "convert.pdf"
self.log.info(f"Converting {document_path} to PDF as {pdf_path}")
with (
GotenbergClient(
host=settings.TIKA_GOTENBERG_ENDPOINT,
timeout=settings.CELERY_TASK_TIME_LIMIT,
) as client,
client.libre_office.to_pdf() as route,
):
# Set the output format of the resulting PDF
if settings.OCR_OUTPUT_TYPE in {
OutputTypeChoices.PDF_A,
OutputTypeChoices.PDF_A2,
}:
route.pdf_format(PdfAFormat.A2b)
elif settings.OCR_OUTPUT_TYPE == OutputTypeChoices.PDF_A1:
self.log.warning(
"Gotenberg does not support PDF/A-1a, choosing PDF/A-2b instead",
)
route.pdf_format(PdfAFormat.A2b)
elif settings.OCR_OUTPUT_TYPE == OutputTypeChoices.PDF_A3:
route.pdf_format(PdfAFormat.A3b)
route.convert(document_path)
try:
response = route.run()
pdf_path.write_bytes(response.content)
return pdf_path
except Exception as err:
raise ParseError(
f"Error while converting document to PDF: {err}",
) from err
def get_settings(self) -> OutputTypeConfig:
"""
This parser only uses the PDF output type configuration currently
"""
return OutputTypeConfig()
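Usage mirrors the text parser, except that a reachable Tika server (settings.TIKA_ENDPOINT) and Gotenberg instance (settings.TIKA_GOTENBERG_ENDPOINT) are required; a hypothetical sketch:

from pathlib import Path

from paperless_tika.parsers import TikaDocumentParser

parser = TikaDocumentParser(logging_group=None)
try:
    parser.parse(Path("sample.odt"), "application/vnd.oasis.opendocument.text")
    print(parser.get_text())          # Text extracted by Tika.
    print(parser.get_archive_path())  # PDF rendition produced via Gotenberg.
finally:
    parser.cleanup()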

View File

@@ -1,13 +1,7 @@
def get_parser(*args, **kwargs):
from paperless.parsers.tika import TikaDocumentParser
from paperless_tika.parsers import TikaDocumentParser
# The new TikaDocumentParser does not accept the legacy logging_group /
# progress_callback kwargs injected by the old signal-based consumer.
# These are dropped here; Phase 4 will replace this signal path with the
# new ParserRegistry so the shim can be removed at that point.
kwargs.pop("logging_group", None)
kwargs.pop("progress_callback", None)
return TikaDocumentParser()
return TikaDocumentParser(*args, **kwargs)
def tika_consumer_declaration(sender, **kwargs):

View File

@@ -0,0 +1,40 @@
from collections.abc import Generator
from pathlib import Path
import pytest
from paperless_tika.parsers import TikaDocumentParser
@pytest.fixture()
def tika_parser() -> Generator[TikaDocumentParser, None, None]:
try:
parser = TikaDocumentParser(logging_group=None)
yield parser
finally:
parser.cleanup()
@pytest.fixture(scope="session")
def sample_dir() -> Path:
return (Path(__file__).parent / Path("samples")).resolve()
@pytest.fixture(scope="session")
def sample_odt_file(sample_dir: Path) -> Path:
return sample_dir / "sample.odt"
@pytest.fixture(scope="session")
def sample_docx_file(sample_dir: Path) -> Path:
return sample_dir / "sample.docx"
@pytest.fixture(scope="session")
def sample_doc_file(sample_dir: Path) -> Path:
return sample_dir / "sample.doc"
@pytest.fixture(scope="session")
def sample_broken_odt(sample_dir: Path) -> Path:
return sample_dir / "multi-part-broken.odt"

Some files were not shown because too many files have changed in this diff.