Compare commits

...

446 Commits

Author SHA1 Message Date
Winfried Plappert
c5e09ae9b1 docs: expand restic find - documentation (#5675) 2026-02-19 18:07:05 +00:00
Winfried Plappert
1f329cd933 docs: expand documentation about testing (#5346) 2026-02-19 18:26:15 +01:00
Johannes Truschnigg
a8f0ad5cc4 mount: check for more requisite mountpoint conditions (#5718)
* mount: check for more requisite mountpoint conditions

In order to be able to mount a repository over a mountpoint target
directory via FUSE, that target directory needs to be both writeable and
executable for the UID performing the mount.

Without this patch, `restic mount` only checks for the target pathname's
existence, which can lead to a lot of data transfer and/or computation
for large repos to be performed before eventually croaking with a fatal
"fusermount: failed to chdir to mountpoint: Permission denied" (or
similar) error.

FUSE does allow for mounting over a target path that refers to a regular
(writeable) file, but the result is not accessible via chdir(), so we
prevent that as well, and accept only directory inodes as the intended
target mountpoint path.

* Don't use snake_case identifiers

* Add changelog entry

* tweak changelog summary

---------

Co-authored-by: Michael Eischer <michael.eischer@fau.de>
2026-02-19 17:11:49 +00:00
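
To illustrate the check described in the commit above, here is a minimal, hypothetical Go sketch (Unix-only, using golang.org/x/sys/unix; not restic's actual code): the mountpoint must exist, be a directory, and be writable and executable for the current user before the expensive repository work starts.

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

// checkMountpoint is a hypothetical helper mirroring the checks described
// above: the target must exist, be a directory (not a regular file), and be
// both writable and executable (searchable) for the current user.
func checkMountpoint(path string) error {
	fi, err := os.Stat(path)
	if err != nil {
		return fmt.Errorf("mountpoint %s does not exist: %w", path, err)
	}
	if !fi.IsDir() {
		return fmt.Errorf("mountpoint %s is not a directory", path)
	}
	// unix.Access checks permissions for the real UID/GID, which is what
	// matters for the subsequent FUSE mount and chdir.
	if err := unix.Access(path, unix.W_OK|unix.X_OK); err != nil {
		return fmt.Errorf("mountpoint %s is not writable and executable: %w", path, err)
	}
	return nil
}

func main() {
	if err := checkMountpoint("/mnt/restic"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
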
Michael Eischer
4c56384481 docs: describe assigning ambient capabilities using systemd (#5698)
---

Co-authored-by: udf2457 <udf2457@users.noreply.github.com>
2026-02-18 22:33:21 +01:00
Winfried Plappert
8b567a9270 Bugfix restic find: missing check for mtime --oldest/--newest (#5310) 2026-02-18 21:14:35 +00:00
Michael Eischer
27c560b371 Merge pull request #5650 from fabien-joubert/docs-warning-capabilities
docs: add warning for capability-based non-root backups
2026-02-18 21:44:04 +01:00
Andreas Scherbaum
66d915ef79 Add space in error message (#5704) 2026-02-18 20:27:02 +00:00
Michael Eischer
7077500a3b Have backup -vv mention compressed size of added files (#5669)
ui: mention compressed size of added files in `backup -vv`

This is already shown for modified files, but the added files message
wasn't updated when compression was implemented in restic.

Co-authored-by: Ilya Grigoriev <ilyagr@users.noreply.github.com>
2026-02-18 21:24:29 +01:00
Michael Eischer
6566f786e9 stats: also print snapshot size statistics in debug mode (#5712) 2026-02-18 21:21:40 +01:00
Michael Eischer
d1937a530b clarify pack ID in decryption error (#5710)
The pack ID is included in full. In addition, the error message now says
that it is a pack file.
2026-02-18 20:43:10 +01:00
gunar
7101f11133 Fail fast for invalid RESTIC_PACK_SIZE env values (#5592)
Co-authored-by: Michael Eischer <michael.eischer@fau.de>
2026-02-01 15:45:31 +01:00
Michael Eischer
8bff5cead0 Merge pull request #5696 from restic/dependabot/go_modules/github.com/minio/minio-go/v7-7.0.98 2026-02-01 12:29:55 +01:00
Michael Eischer
5e43a44b15 Merge pull request #5680 from castilma/unlock-doc 2026-02-01 12:13:52 +01:00
Michael Eischer
67c13c643d Merge pull request #5691 from MichaelEischer/fix-rewriter-error 2026-02-01 12:09:52 +01:00
dependabot[bot]
b706c19614 build(deps): bump github.com/minio/minio-go/v7 from 7.0.97 to 7.0.98
Bumps [github.com/minio/minio-go/v7](https://github.com/minio/minio-go) from 7.0.97 to 7.0.98.
- [Release notes](https://github.com/minio/minio-go/releases)
- [Commits](https://github.com/minio/minio-go/compare/v7.0.97...v7.0.98)

---
updated-dependencies:
- dependency-name: github.com/minio/minio-go/v7
  dependency-version: 7.0.98
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-01 11:04:31 +00:00
Michael Eischer
da2ed89ffd Merge pull request #5697 from restic/dependabot/go_modules/github.com/klauspost/compress-1.18.3 2026-02-01 12:01:36 +01:00
Michael Eischer
cf3793bb41 Merge pull request #5695 from restic/dependabot/go_modules/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob-1.6.4 2026-02-01 12:00:39 +01:00
Michael Eischer
db8e379fd4 Merge pull request #5694 from restic/dependabot/go_modules/cloud.google.com/go/storage-1.59.2 2026-02-01 12:00:05 +01:00
Michael Eischer
4f73daa761 Merge pull request #5693 from restic/dependabot/go_modules/golang-x-deps-173d0ad829 2026-02-01 11:59:31 +01:00
Michael Eischer
48cfa908ed Merge pull request #5692 from restic/dependabot/github_actions/docker/login-action-3.7.0 2026-02-01 11:56:38 +01:00
Michael Eischer
d3c225627f Merge pull request #5682 from wplapper/docs_list 2026-02-01 11:55:37 +01:00
Michael Eischer
07d380d54b Merge pull request #5191 from wplapper/cmd_rewrite_include 2026-02-01 11:53:05 +01:00
Winfried Plappert
b544e71cac restic list doc - documentation
fixed wording for paragraph and inconsistent underlining
2026-02-01 06:12:56 +00:00
Winfried Plappert
099650f883 docs: restic list
corrected typo
2026-02-01 06:02:39 +00:00
Winfried Plappert
6154685c3a docs: add documentation for restic list
added a file doc/view_repository.rst which contains the
description of `restic list ...`
2026-02-01 06:02:39 +00:00
dependabot[bot]
66bb196591 build(deps): bump github.com/klauspost/compress from 1.18.2 to 1.18.3
Bumps [github.com/klauspost/compress](https://github.com/klauspost/compress) from 1.18.2 to 1.18.3.
- [Release notes](https://github.com/klauspost/compress/releases)
- [Commits](https://github.com/klauspost/compress/compare/v1.18.2...v1.18.3)

---
updated-dependencies:
- dependency-name: github.com/klauspost/compress
  dependency-version: 1.18.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-01 01:43:15 +00:00
dependabot[bot]
2be17d2313 build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
Bumps [github.com/Azure/azure-sdk-for-go/sdk/storage/azblob](https://github.com/Azure/azure-sdk-for-go) from 1.6.3 to 1.6.4.
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/storage/azblob/v1.6.3...sdk/storage/azblob/v1.6.4)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
  dependency-version: 1.6.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-01 01:42:57 +00:00
dependabot[bot]
34ba097162 build(deps): bump cloud.google.com/go/storage from 1.58.0 to 1.59.2
Bumps [cloud.google.com/go/storage](https://github.com/googleapis/google-cloud-go) from 1.58.0 to 1.59.2.
- [Release notes](https://github.com/googleapis/google-cloud-go/releases)
- [Changelog](https://github.com/googleapis/google-cloud-go/blob/main/CHANGES.md)
- [Commits](https://github.com/googleapis/google-cloud-go/compare/spanner/v1.58.0...storage/v1.59.2)

---
updated-dependencies:
- dependency-name: cloud.google.com/go/storage
  dependency-version: 1.59.2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-01 01:42:48 +00:00
dependabot[bot]
38f1fb61f3 build(deps): bump the golang-x-deps group with 5 updates
Bumps the golang-x-deps group with 5 updates:

| Package | From | To |
| --- | --- | --- |
| [golang.org/x/crypto](https://github.com/golang/crypto) | `0.46.0` | `0.47.0` |
| [golang.org/x/net](https://github.com/golang/net) | `0.48.0` | `0.49.0` |
| [golang.org/x/sys](https://github.com/golang/sys) | `0.39.0` | `0.40.0` |
| [golang.org/x/term](https://github.com/golang/term) | `0.38.0` | `0.39.0` |
| [golang.org/x/text](https://github.com/golang/text) | `0.32.0` | `0.33.0` |


Updates `golang.org/x/crypto` from 0.46.0 to 0.47.0
- [Commits](https://github.com/golang/crypto/compare/v0.46.0...v0.47.0)

Updates `golang.org/x/net` from 0.48.0 to 0.49.0
- [Commits](https://github.com/golang/net/compare/v0.48.0...v0.49.0)

Updates `golang.org/x/sys` from 0.39.0 to 0.40.0
- [Commits](https://github.com/golang/sys/compare/v0.39.0...v0.40.0)

Updates `golang.org/x/term` from 0.38.0 to 0.39.0
- [Commits](https://github.com/golang/term/compare/v0.38.0...v0.39.0)

Updates `golang.org/x/text` from 0.32.0 to 0.33.0
- [Release notes](https://github.com/golang/text/releases)
- [Commits](https://github.com/golang/text/compare/v0.32.0...v0.33.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-version: 0.47.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-deps
- dependency-name: golang.org/x/net
  dependency-version: 0.49.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-deps
- dependency-name: golang.org/x/sys
  dependency-version: 0.40.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-deps
- dependency-name: golang.org/x/term
  dependency-version: 0.39.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-deps
- dependency-name: golang.org/x/text
  dependency-version: 0.33.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-deps
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-01 01:42:37 +00:00
dependabot[bot]
827c7bcae8 build(deps): bump docker/login-action from 3.6.0 to 3.7.0
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.6.0 to 3.7.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](5e57cd1181...c94ce9fb46)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: 3.7.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-02-01 01:42:36 +00:00
Winfried Plappert
bcd4168428 Enhancement: calls to SnapshotFilter.FindLatest() can be simplified (#5688) 2026-01-31 23:04:01 +01:00
Michael Eischer
901235efc9 rewrite: skip snapshot parts not matchable by include patterns 2026-01-31 22:42:02 +01:00
Michael Eischer
ef1d525f22 rewriter: return correct error if tree iteration fails 2026-01-31 22:07:07 +01:00
Michael Eischer
74d60ad223 rewriter: test KeepEmptyDirectory option 2026-01-31 22:01:23 +01:00
Michael Eischer
0d71f70a22 minor cleanups and typos 2026-01-31 22:01:23 +01:00
Winfried Plappert
ee154ce0ab restic rewrite --include
added function Count() to the *TreeWriter methods
2026-01-31 19:58:29 +00:00
Winfried Plappert
b6af01bb28 restic rewrite integration test
convert exclusive lock on repository to 'no-lock'
2026-01-31 19:43:03 +00:00
Winfried Plappert
5148608c39 restic rewrite include - based on restic 0.18.1
cmd/restic/cmd_rewrite.go:
introduction of include filters for this command:
- add include filters, add error checking code
- add new parameter 'keepEmptyDirectoryFunc' to 'walker.NewSnapshotSizeRewriter()',
  so empty directories have to be kept to keep the directory structure intact
- add parameter 'keepEmptySnapshot' to 'filterAndReplaceSnapshot()' to keep snapshots
  intact when nothing is to be included
- introduce helper function 'gatherIncludeFilters()' and 'gatherExcludeFilters()' to
  keep code flow clean

cmd/restic/cmd_rewrite_integration_test.go:
add several new tests around the 'include' functionality

internal/filter/include.go:
this is where the include filter is defined

internal/walker/rewriter.go:
- struct RewriteOpts gains field 'KeepEmptyDirectory', which is a 'NodeKeepEmptyDirectoryFunc()'
  which defaults to nil, so that all subdirectories are kept
- function 'NewSnapshotSizeRewriter()' gains the parameter 'keepEmptyDirectoryFilter' which
  controls the management of empty subdirectories when include filters are active

internal/data/tree.go:
gains a function Count() for checking the number of node elements in a newly built tree

internal/walker/rewriter_test.go:
function 'NewSnapshotSizeRewriter()' gets an additional parameter nil to keep things happy

cmd/restic/cmd_repair_snapshots.go:
function 'filterAndReplaceSnapshot()' gets an additional parameter 'keepEmptySnapshot=nil'

doc/045_working_with_repos.rst:
gets to mention include filters

changelog/unreleased/issue-4278:
the usual announcement file

git rebase master -i produced this

restic rewrite include - keep linter happy

cmd/restic/cmd_rewrite_integration_test.go:
linter likes strings.Contains() better than my strings.Index() >= 0
2026-01-31 19:42:56 +00:00
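
The sketch below illustrates how a keep-empty-directory callback of the kind named in this commit (NodeKeepEmptyDirectoryFunc, RewriteOpts) might be consulted; the types and logic are simplified stand-ins, not restic's actual rewriter.

```go
package main

import "fmt"

// Node is a simplified stand-in for a tree node.
type Node struct {
	Name string
	Type string // "dir" or "file"
}

// NodeKeepEmptyDirectoryFunc decides whether an empty directory is kept in
// the rewritten tree. A nil function means "keep everything", matching the
// default described in the commit message.
type NodeKeepEmptyDirectoryFunc func(node *Node, path string) bool

// RewriteOpts mirrors the option struct mentioned above.
type RewriteOpts struct {
	KeepEmptyDirectory NodeKeepEmptyDirectoryFunc
}

// keepDirectory shows how a rewriter might consult the callback for a
// directory whose rewritten subtree ended up empty.
func keepDirectory(opts RewriteOpts, node *Node, path string, childCount int) bool {
	if childCount > 0 {
		return true // non-empty directories are always kept
	}
	if opts.KeepEmptyDirectory == nil {
		return true // default: keep all subdirectories
	}
	return opts.KeepEmptyDirectory(node, path)
}

func main() {
	// With include filters active, empty directories on the path to an
	// included file are kept; unrelated empty directories are dropped.
	opts := RewriteOpts{KeepEmptyDirectory: func(node *Node, path string) bool {
		return path == "/home/user" // hypothetical include prefix
	}}
	fmt.Println(keepDirectory(opts, &Node{Name: "user", Type: "dir"}, "/home/user", 0)) // true
	fmt.Println(keepDirectory(opts, &Node{Name: "tmp", Type: "dir"}, "/tmp", 0))        // false
}
```
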
Michael Eischer
083cdf0675 Merge pull request #5613 from MichaelEischer/tree-node-iterator 2026-01-31 20:10:57 +01:00
Michael Eischer
ce7c144aac data: add support for unknown keys to treeIterator
While not planned, it's also not completely impossible that a tree node
might get additional top-level fields. As the tree iterator is built
with a strict expectation of the top-level fields, this would result in
a parsing error. Future-proof the code by simply skipping unknown
fields.
2026-01-31 20:03:38 +01:00
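
A small, self-contained sketch of the future-proofing idea described above: a streaming JSON decoder that decodes the fields it knows about and discards unknown top-level keys. It uses only encoding/json and is not the actual treeIterator implementation.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// decodeKnownFields reads a JSON object token by token and decodes only the
// fields it knows about; the values of unknown keys are decoded into a
// json.RawMessage and discarded, so new top-level fields do not break parsing.
func decodeKnownFields(r *strings.Reader) (nodes json.RawMessage, err error) {
	dec := json.NewDecoder(r)
	if _, err = dec.Token(); err != nil { // consume opening '{'
		return nil, err
	}
	for dec.More() {
		keyTok, err := dec.Token()
		if err != nil {
			return nil, err
		}
		key, _ := keyTok.(string)
		switch key {
		case "nodes":
			if err := dec.Decode(&nodes); err != nil {
				return nil, err
			}
		default:
			var skip json.RawMessage // skip the value of an unknown field
			if err := dec.Decode(&skip); err != nil {
				return nil, err
			}
		}
	}
	_, err = dec.Token() // consume closing '}'
	return nodes, err
}

func main() {
	input := `{"nodes": [{"name": "a"}], "future_field": {"x": 1}}`
	nodes, err := decodeKnownFields(strings.NewReader(input))
	fmt.Println(string(nodes), err)
}
```
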
Michael Eischer
81948937ca data: test DualTreeIterator 2026-01-31 20:03:38 +01:00
Michael Eischer
fa8889eec4 data: test LoadTree+SaveTree cycle 2026-01-31 20:03:38 +01:00
Michael Eischer
6de64911fb data: test TreeFinder 2026-01-31 20:03:38 +01:00
Michael Eischer
17688c2313 data: move TestTreeMap to data package to allow reuse 2026-01-31 20:03:38 +01:00
Michael Eischer
e1a5550a27 test: use generics in Equal function signature
This simplifies comparing a typed value against nil. Previously it was
necessary to cast nil into the proper type.
2026-01-31 20:03:38 +01:00
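
A minimal sketch of a generics-based Equal helper with the benefit described above; the signature and the Snapshot type are illustrative, not the ones in restic's test package.

```go
package example

import "testing"

// Snapshot is a stand-in for a value returned by the code under test.
type Snapshot struct{ ID string }

// Equal fails the test if want and got differ. Because both values share the
// type parameter T, an untyped nil is inferred as the proper type, so callers
// no longer need to cast nil explicitly.
func Equal[T comparable](t testing.TB, want, got T) {
	t.Helper()
	if want != got {
		t.Fatalf("wanted %v, got %v", want, got)
	}
}

// TestLookup shows the simplified comparison against nil.
func TestLookup(t *testing.T) {
	var snap *Snapshot   // e.g. "not found" result
	Equal(t, nil, snap)  // previously: Equal(t, (*Snapshot)(nil), snap)
}
```
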
Michael Eischer
24d56fe2a6 diff: switch to efficient DualTreeIterator
The previous implementation stored the whole tree in a map and used it
for checking overlap between trees. This is now replaced with the
DualTreeIterator, which iterates over two trees in parallel and returns
the merge stream in order. In case of overlap between both trees, it
returns both nodes at the same time. Otherwise, only a single node is
returned.
2026-01-31 20:03:38 +01:00
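
The following sketch shows the core merge idea attributed to the DualTreeIterator above, applied to plain name-sorted slices: matching names yield both nodes at once, otherwise a single node is yielded. It is illustrative only.

```go
package main

import "fmt"

type Node struct{ Name string }

// mergeSorted walks two name-sorted node slices in parallel. For names
// present in both trees it yields both nodes together; otherwise only the
// node from one side is yielded, mirroring the behaviour described above.
func mergeSorted(a, b []Node, visit func(x, y *Node)) {
	i, j := 0, 0
	for i < len(a) && j < len(b) {
		switch {
		case a[i].Name == b[j].Name:
			visit(&a[i], &b[j])
			i++
			j++
		case a[i].Name < b[j].Name:
			visit(&a[i], nil)
			i++
		default:
			visit(nil, &b[j])
			j++
		}
	}
	for ; i < len(a); i++ {
		visit(&a[i], nil)
	}
	for ; j < len(b); j++ {
		visit(nil, &b[j])
	}
}

func main() {
	a := []Node{{"bar"}, {"foo"}}
	b := []Node{{"baz"}, {"foo"}}
	mergeSorted(a, b, func(x, y *Node) { fmt.Println(x, y) })
}
```
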
Michael Eischer
350f29d921 data: replace Tree with TreeNodeIterator
The TreeNodeIterator decodes nodes while iterating over a tree blob.
This should reduce peak memory usage as now only the serialized tree
blob and a single node have to be alive at the same time. Using the
iterator has implications for error handling, however. Now it is
necessary that all loops that iterate through a tree check for errors
before using the node returned by the iterator.

The other change is that it is no longer possible to iterate over a tree
multiple times. Instead it must be loaded a second time. This only
affects the tree rewriting code.
2026-01-31 20:03:38 +01:00
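
A sketch of the error-handling contract described above, expressed with a Go 1.23-style iter.Seq2 iterator: consumers must check the error before using the yielded node. Names and the decoding details are illustrative.

```go
package main

import (
	"errors"
	"fmt"
	"iter"
)

type Node struct{ Name string }

// decodeNodes returns an iterator that decodes one node at a time from a
// serialized tree blob. On a decoding failure it yields (nil, err) once and
// stops, so every consumer must check the error before using the node.
func decodeNodes(blob []string) iter.Seq2[*Node, error] {
	return func(yield func(*Node, error) bool) {
		for _, raw := range blob {
			if raw == "" { // stand-in for a decoding error
				yield(nil, errors.New("malformed node"))
				return
			}
			if !yield(&Node{Name: raw}, nil) {
				return
			}
		}
	}
}

func main() {
	for node, err := range decodeNodes([]string{"a", "", "b"}) {
		if err != nil {
			fmt.Println("tree iteration failed:", err)
			return
		}
		fmt.Println("node:", node.Name)
	}
}
```
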
Michael Eischer
1e183509d4 data: rework StreamTrees to use synchronous callbacks
The tree.Nodes field will be replaced by an iterator that loads and serializes
tree nodes on demand. Thus, the processing moves from StreamTrees into the
callback. Schedule the callbacks onto the workers used by StreamTrees for proper
load distribution.
2026-01-31 20:03:38 +01:00
Michael Eischer
25a5aa3520 dump: fix missing error handling if tree cannot be read 2026-01-31 19:18:36 +01:00
Michael Eischer
278e457e1f data: use data.TreeWriter to serialize&write data.Tree
Always serialize trees via TreeJSONBuilder. Add a wrapper called
TreeWriter which combines serialization and saving the tree blob in the
repository. In the future, TreeJSONBuilder will have to upload tree
chunks while the tree is still being serialized. This will require a wrapper
like TreeWriter, so add it right away.

The archiver.treeSaver still directly uses the TreeJSONBuilder as it
requires special handling.
2026-01-31 19:18:36 +01:00
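
A rough sketch of the wrapper idea: a TreeWriter that combines serialization and saving of the finished tree blob. The names follow the commit message, while the implementation details (JSON shape, hash-as-ID save function) are simplified assumptions.

```go
package main

import (
	"context"
	"crypto/sha256"
	"encoding/json"
	"fmt"
)

type Node struct {
	Name string `json:"name"`
}

// blobSaver is a stand-in for the repository's tree-blob upload function.
type blobSaver func(ctx context.Context, data []byte) ([32]byte, error)

// TreeWriter combines serialization (via an internal builder) and saving the
// finished tree blob, so both steps always happen together.
type TreeWriter struct {
	nodes []Node
	save  blobSaver
}

func NewTreeWriter(save blobSaver) *TreeWriter { return &TreeWriter{save: save} }

func (w *TreeWriter) AddNode(n Node) { w.nodes = append(w.nodes, n) }

// Finalize serializes the collected nodes and stores the blob, returning its ID.
func (w *TreeWriter) Finalize(ctx context.Context) ([32]byte, error) {
	buf, err := json.Marshal(struct {
		Nodes []Node `json:"nodes"`
	}{w.nodes})
	if err != nil {
		return [32]byte{}, err
	}
	return w.save(ctx, buf)
}

func main() {
	save := func(_ context.Context, data []byte) ([32]byte, error) {
		return sha256.Sum256(data), nil // pretend upload; ID is the content hash
	}
	tw := NewTreeWriter(save)
	tw.AddNode(Node{Name: "file.txt"})
	id, err := tw.Finalize(context.Background())
	fmt.Printf("%x %v\n", id, err)
}
```
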
Michael Eischer
f84d398989 repository: prevent test deadlock within WithBlobUploader
Calling t.Fatal internally triggers runtime.Goexit. This kills the
current goroutine while only running deferred code. Add an extra context
that gets canceled if the goroutine exits while within the user-provided
callback.
2026-01-31 19:18:36 +01:00
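
A small sketch of the guard described above: the user callback runs in its own goroutine, and a deferred cancel of an extra context fires even when runtime.Goexit (e.g. from t.Fatal) skips the rest of the callback, so background work tied to that context stops instead of deadlocking. The helper name is hypothetical.

```go
package main

import (
	"context"
	"fmt"
	"runtime"
	"time"
)

// withCallback runs fn in its own goroutine and cancels ctx as soon as that
// goroutine exits for any reason, including runtime.Goexit triggered by
// t.Fatal. Background work tied to ctx then stops instead of hanging.
func withCallback(parent context.Context, fn func(ctx context.Context)) {
	ctx, cancel := context.WithCancel(parent)
	done := make(chan struct{})
	go func() {
		defer close(done)
		defer cancel() // runs even if fn calls runtime.Goexit
		fn(ctx)
	}()
	<-done
}

func main() {
	withCallback(context.Background(), func(ctx context.Context) {
		go func() {
			<-ctx.Done() // background worker notices the cancellation
			fmt.Println("worker stopped:", ctx.Err())
		}()
		runtime.Goexit() // stand-in for t.Fatal inside the callback
	})
	time.Sleep(100 * time.Millisecond) // give the worker time to print
}
```
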
Michael Eischer
d82ea53735 data: fix invalid trees used in test cases
data.TestCreateSnapshot, which is used in particular by TestFindUsedBlobs,
could generate trees with duplicate file names. This is invalid and will
result in an error going forward.
2026-01-31 19:18:36 +01:00
Michael Eischer
34fdf5ba96 Merge pull request #5636 from MichaelEischer/clarify-parameter-docs 2026-01-31 19:15:24 +01:00
Michael Eischer
70591f00ed Merge pull request #5690 from restic/backend-no-restic-imports 2026-01-31 19:13:07 +01:00
Michael Eischer
4bc6bb7e27 slightly reduce redundant wording 2026-01-31 19:07:06 +01:00
Michael Eischer
2628daba97 CI: prevent backends from importing internal/restic package 2026-01-31 12:00:04 +01:00
Martin Castillo
2269ec82e1 man: add note about append-only mode to restic-unlock.1
Since restic unlock _removes_ locks, a user might wonder
whether this works in append-only mode, which prevents deletion of
data.  This change explicitly mentions in the man page that the deletion
of locks is an allowed exception.
2026-01-27 12:07:47 +01:00
Winfried Plappert
86ccc6d445 Bugfix: restic check: add missing finalizeSnapshotFilter() (#5644)
add missing finalizeSnapshotFilter() to cmd.RunE()

---------

Co-authored-by: Michael Eischer <michael.eischer@fau.de>
2026-01-26 21:08:15 +00:00
Michael Eischer
d0a5d0e2f7 Merge pull request #5657 from restic/dependabot/go_modules/github.com/spf13/cobra-1.10.2
build(deps): bump github.com/spf13/cobra from 1.10.1 to 1.10.2
2026-01-26 21:52:26 +01:00
Michael Eischer
fa13f1895f Merge pull request #5658 from restic/dependabot/go_modules/github.com/elithrar/simple-scrypt-1.4.0
build(deps): bump github.com/elithrar/simple-scrypt from 1.3.0 to 1.4.0
2026-01-26 21:48:28 +01:00
Michael Eischer
880b08f9ec Merge pull request #5627 from MichaelEischer/faster-files-writer
restore: tune fileswriter
2026-01-26 21:45:49 +01:00
dependabot[bot]
1368db5777 build(deps): bump github.com/spf13/cobra from 1.10.1 to 1.10.2
Bumps [github.com/spf13/cobra](https://github.com/spf13/cobra) from 1.10.1 to 1.10.2.
- [Release notes](https://github.com/spf13/cobra/releases)
- [Commits](https://github.com/spf13/cobra/compare/v1.10.1...v1.10.2)

---
updated-dependencies:
- dependency-name: github.com/spf13/cobra
  dependency-version: 1.10.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-26 20:40:43 +00:00
Michael Eischer
f78e3f369d Merge pull request #5637 from MichaelEischer/docs-table-width
docs: fix table width
2026-01-26 21:38:34 +01:00
Michael Eischer
39271a9984 Merge pull request #5656 from restic/dependabot/go_modules/cloud.google.com/go/storage-1.58.0
build(deps): bump cloud.google.com/go/storage from 1.57.2 to 1.58.0
2026-01-26 21:37:24 +01:00
Michael Eischer
2c1e8a0412 Merge pull request #5655 from restic/dependabot/go_modules/github.com/klauspost/compress-1.18.2
build(deps): bump github.com/klauspost/compress from 1.18.1 to 1.18.2
2026-01-26 21:36:53 +01:00
Michael Eischer
155372404a Merge pull request #5654 from restic/dependabot/go_modules/golang-x-deps-f1409dc592
build(deps): bump the golang-x-deps group with 7 updates
2026-01-26 21:36:18 +01:00
Ilya Grigoriev
79c37f3d1a ui: mention compressed size of added files in backup -vv
This is already shown for modified files, but the added files message
wasn't updated when compression was implemented in restic.
2026-01-15 18:39:16 -08:00
dependabot[bot]
80531dbe53 build(deps): bump github.com/elithrar/simple-scrypt from 1.3.0 to 1.4.0
Bumps [github.com/elithrar/simple-scrypt](https://github.com/elithrar/simple-scrypt) from 1.3.0 to 1.4.0.
- [Release notes](https://github.com/elithrar/simple-scrypt/releases)
- [Commits](https://github.com/elithrar/simple-scrypt/compare/v1.3.0...v1.4.0)

---
updated-dependencies:
- dependency-name: github.com/elithrar/simple-scrypt
  dependency-version: 1.4.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-01 01:02:15 +00:00
dependabot[bot]
40fe9f34e7 build(deps): bump cloud.google.com/go/storage from 1.57.2 to 1.58.0
Bumps [cloud.google.com/go/storage](https://github.com/googleapis/google-cloud-go) from 1.57.2 to 1.58.0.
- [Release notes](https://github.com/googleapis/google-cloud-go/releases)
- [Changelog](https://github.com/googleapis/google-cloud-go/blob/main/CHANGES.md)
- [Commits](https://github.com/googleapis/google-cloud-go/compare/storage/v1.57.2...spanner/v1.58.0)

---
updated-dependencies:
- dependency-name: cloud.google.com/go/storage
  dependency-version: 1.58.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-01 01:02:06 +00:00
dependabot[bot]
4d0ec87f35 build(deps): bump github.com/klauspost/compress from 1.18.1 to 1.18.2
Bumps [github.com/klauspost/compress](https://github.com/klauspost/compress) from 1.18.1 to 1.18.2.
- [Release notes](https://github.com/klauspost/compress/releases)
- [Commits](https://github.com/klauspost/compress/compare/v1.18.1...v1.18.2)

---
updated-dependencies:
- dependency-name: github.com/klauspost/compress
  dependency-version: 1.18.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-01 01:01:51 +00:00
dependabot[bot]
d6f376b6c8 build(deps): bump the golang-x-deps group with 7 updates
Bumps the golang-x-deps group with 7 updates:

| Package | From | To |
| --- | --- | --- |
| [golang.org/x/crypto](https://github.com/golang/crypto) | `0.45.0` | `0.46.0` |
| [golang.org/x/net](https://github.com/golang/net) | `0.47.0` | `0.48.0` |
| [golang.org/x/oauth2](https://github.com/golang/oauth2) | `0.33.0` | `0.34.0` |
| [golang.org/x/sync](https://github.com/golang/sync) | `0.18.0` | `0.19.0` |
| [golang.org/x/sys](https://github.com/golang/sys) | `0.38.0` | `0.39.0` |
| [golang.org/x/term](https://github.com/golang/term) | `0.37.0` | `0.38.0` |
| [golang.org/x/text](https://github.com/golang/text) | `0.31.0` | `0.32.0` |


Updates `golang.org/x/crypto` from 0.45.0 to 0.46.0
- [Commits](https://github.com/golang/crypto/compare/v0.45.0...v0.46.0)

Updates `golang.org/x/net` from 0.47.0 to 0.48.0
- [Commits](https://github.com/golang/net/compare/v0.47.0...v0.48.0)

Updates `golang.org/x/oauth2` from 0.33.0 to 0.34.0
- [Commits](https://github.com/golang/oauth2/compare/v0.33.0...v0.34.0)

Updates `golang.org/x/sync` from 0.18.0 to 0.19.0
- [Commits](https://github.com/golang/sync/compare/v0.18.0...v0.19.0)

Updates `golang.org/x/sys` from 0.38.0 to 0.39.0
- [Commits](https://github.com/golang/sys/compare/v0.38.0...v0.39.0)

Updates `golang.org/x/term` from 0.37.0 to 0.38.0
- [Commits](https://github.com/golang/term/compare/v0.37.0...v0.38.0)

Updates `golang.org/x/text` from 0.31.0 to 0.32.0
- [Release notes](https://github.com/golang/text/releases)
- [Commits](https://github.com/golang/text/compare/v0.31.0...v0.32.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-version: 0.46.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-deps
- dependency-name: golang.org/x/net
  dependency-version: 0.48.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-deps
- dependency-name: golang.org/x/oauth2
  dependency-version: 0.34.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-deps
- dependency-name: golang.org/x/sync
  dependency-version: 0.19.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-deps
- dependency-name: golang.org/x/sys
  dependency-version: 0.39.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-deps
- dependency-name: golang.org/x/term
  dependency-version: 0.38.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-deps
- dependency-name: golang.org/x/text
  dependency-version: 0.32.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-deps
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-01-01 01:01:44 +00:00
fabien-joubert
8179c4f676 docs: add warning for capability-based non-root backups 2025-12-27 22:38:16 +01:00
Michael Eischer
9e2d60e28c Merge pull request #5632 from restic/dependabot/go_modules/github.com/minio/minio-go/v7-7.0.97
build(deps): bump github.com/minio/minio-go/v7 from 7.0.95 to 7.0.97
2025-12-03 21:34:27 +01:00
Michael Eischer
ebc51e60c9 Merge pull request #5626 from MichaelEischer/lazy-status
ui: only redraw status bar if it has not changed
2025-12-03 21:29:35 +01:00
dependabot[bot]
a9a13afcec build(deps): bump github.com/minio/minio-go/v7 from 7.0.95 to 7.0.97
Bumps [github.com/minio/minio-go/v7](https://github.com/minio/minio-go) from 7.0.95 to 7.0.97.
- [Release notes](https://github.com/minio/minio-go/releases)
- [Commits](https://github.com/minio/minio-go/compare/v7.0.95...v7.0.97)

---
updated-dependencies:
- dependency-name: github.com/minio/minio-go/v7
  dependency-version: 7.0.97
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-03 20:24:30 +00:00
Michael Eischer
d7b87cedbc Merge pull request #5630 from restic/dependabot/go_modules/github.com/ncw/swift/v2-2.0.5
build(deps): bump github.com/ncw/swift/v2 from 2.0.4 to 2.0.5
2025-12-03 21:23:18 +01:00
Michael Eischer
a8be8e36fa Merge pull request #5621 from MichaelEischer/copy-stream-snapshots
copy: iterate through snapshots
2025-12-03 21:21:05 +01:00
dependabot[bot]
74f72ec707 build(deps): bump github.com/ncw/swift/v2 from 2.0.4 to 2.0.5
Bumps [github.com/ncw/swift/v2](https://github.com/ncw/swift) from 2.0.4 to 2.0.5.
- [Release notes](https://github.com/ncw/swift/releases)
- [Changelog](https://github.com/ncw/swift/blob/master/RELEASE.md)
- [Commits](https://github.com/ncw/swift/compare/v2.0.4...v2.0.5)

---
updated-dependencies:
- dependency-name: github.com/ncw/swift/v2
  dependency-version: 2.0.5
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-03 20:06:29 +00:00
Michael Eischer
0b0b714b84 Merge pull request #5628 from MichaelEischer/cleanup-old-build-lines
remove old // +build comments
2025-12-03 21:06:11 +01:00
Michael Eischer
3df4582b2b Merge pull request #5635 from restic/dependabot/github_actions/golangci/golangci-lint-action-9
build(deps): bump golangci/golangci-lint-action from 8 to 9
2025-12-03 21:03:55 +01:00
Michael Eischer
a24184357e Merge pull request #5634 from restic/dependabot/github_actions/actions/checkout-6
build(deps): bump actions/checkout from 5 to 6
2025-12-03 21:00:50 +01:00
Michael Eischer
0d024ad046 Merge pull request #5631 from restic/dependabot/go_modules/github.com/Azure/azure-sdk-for-go/sdk/azidentity-1.13.1
build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azidentity from 1.13.0 to 1.13.1
2025-12-03 20:58:55 +01:00
Michael Eischer
3efd7b5fd0 Merge pull request #5629 from restic/dependabot/go_modules/github.com/klauspost/compress-1.18.1
build(deps): bump github.com/klauspost/compress from 1.18.0 to 1.18.1
2025-12-03 20:58:48 +01:00
Michael Eischer
4fd9bfc32b docs: fix table width 2025-12-03 20:38:21 +01:00
Michael Eischer
7a3b06f78a docs: move environment variables to the scripting section 2025-12-03 18:38:36 +01:00
Michael Eischer
a58d176500 docs: clarify that parameter tuning applies to all commands 2025-12-03 18:33:00 +01:00
dependabot[bot]
0af1257184 build(deps): bump golangci/golangci-lint-action from 8 to 9
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 8 to 9.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v8...v9)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-version: '9'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-01 01:14:24 +00:00
dependabot[bot]
a3f1c65022 build(deps): bump actions/checkout from 5 to 6
Bumps [actions/checkout](https://github.com/actions/checkout) from 5 to 6.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-01 01:14:22 +00:00
dependabot[bot]
fa4ca9b5b4 build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azidentity
Bumps [github.com/Azure/azure-sdk-for-go/sdk/azidentity](https://github.com/Azure/azure-sdk-for-go) from 1.13.0 to 1.13.1.
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.13.0...sdk/azidentity/v1.13.1)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azidentity
  dependency-version: 1.13.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-01 01:04:00 +00:00
dependabot[bot]
ebdeecde42 build(deps): bump github.com/klauspost/compress from 1.18.0 to 1.18.1
Bumps [github.com/klauspost/compress](https://github.com/klauspost/compress) from 1.18.0 to 1.18.1.
- [Release notes](https://github.com/klauspost/compress/releases)
- [Commits](https://github.com/klauspost/compress/compare/v1.18.0...v1.18.1)

---
updated-dependencies:
- dependency-name: github.com/klauspost/compress
  dependency-version: 1.18.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-01 01:03:43 +00:00
Michael Eischer
1e6ed458ff remove old // +build comments 2025-11-30 11:53:23 +01:00
Michael Eischer
760d0220f4 restorer: scale file cache with workers count 2025-11-30 11:01:01 +01:00
Michael Eischer
24fcfeafcb restore: cache file descriptors
This avoids opening and closing files after each single blob write
2025-11-30 10:56:15 +01:00
Michael Eischer
0ee9360f3e restore: reduce contention while writing files 2025-11-29 23:09:04 +01:00
Michael Eischer
ae6d6bd9a6 ui: only redraw status bar if it has not changed 2025-11-29 22:09:41 +01:00
Aneesh N
b9afdf795e Fix: Correctly restore ACL inheritance state (#5465)
* Fix: Correctly restore ACL inheritance state

When restoring a file or directory on Windows, the `IsInherited` property of its Access Control Entries (ACEs) was always being set to `False`, even if the ACEs were inherited in the original backup.

This was caused by the restore process calling the `SetNamedSecurityInfo` API without providing context about the object's inheritance policy. By default, this API applies the provided Discretionary Access Control List (DACL) as an explicit set of permissions, thereby losing the original inheritance state.

This commit fixes the issue by inspecting the `Control` flags of the saved Security Descriptor during restore. Based on whether the `SE_DACL_PROTECTED` flag is present, the code now adds the appropriate `PROTECTED_DACL_SECURITY_INFORMATION` or `UNPROTECTED_DACL_SECURITY_INFORMATION` flag to the `SetNamedSecurityInfo` API call.

By providing this crucial inheritance context, the Windows API can now correctly reconstruct the ACL, ensuring the `IsInherited` status of each ACE is preserved as it was at the time of backup.

* Fix: Correctly restore ACL inheritance flags

This commit resolves an issue where the ACL inheritance state (`IsInherited` property) was not being correctly restored for files and directories on Windows.

The root cause was that the `SECURITY_INFORMATION` flags used in the `SetNamedSecurityInfo` API call contained both the `PROTECTED_DACL_SECURITY_INFORMATION` and `UNPROTECTED_DACL_SECURITY_INFORMATION` flags simultaneously. When faced with this conflicting information, the Windows API defaulted to the more restrictive `PROTECTED` behavior, incorrectly disabling inheritance on restored items.

The fix modifies the `setNamedSecurityInfoHigh` function to first clear all existing inheritance-related flags from the `securityInfo` bitmask. It then adds the single, correct flag (`PROTECTED` or `UNPROTECTED`) based on the `SE_DACL_PROTECTED` control bit from the original, saved Security Descriptor.

This ensures that the API receives unambiguous instructions, allowing it to correctly preserve the inheritance state as it was at the time of backup. The accompanying test case for ACL inheritance now passes with this change.

* Fix inheritance flag handling in low-privilege security descriptor restore

When restoring files without admin privileges, the IsInherited property
of Access Control Entries (ACEs) was not being preserved correctly.
The low-privilege restore path (setNamedSecurityInfoLow) was using a
static PROTECTED_DACL_SECURITY_INFORMATION flag, which always marked
the restored DACL as explicitly set rather than inherited.

This commit updates setNamedSecurityInfoLow to dynamically determine
the correct inheritance flag based on the SE_DACL_PROTECTED control
flag from the original security descriptor, matching the behavior of
the high-privilege path (setNamedSecurityInfoHigh).

Changes:
- Update setNamedSecurityInfoLow to accept control flags parameter
- Add logic to set either PROTECTED_DACL_SECURITY_INFORMATION or
  UNPROTECTED_DACL_SECURITY_INFORMATION based on the original SD
- Add TestRestoreSecurityDescriptorInheritanceLowPrivilege to verify
  inheritance is correctly restored in low-privilege scenarios

This ensures that both admin and non-admin restore operations correctly
preserve the inheritance state of ACLs, maintaining the original
permissions flow on child objects.

Addresses review feedback on PR for issue #5427

* Refactor security flags into separate backup/restore variants

Split highSecurityFlags into highBackupSecurityFlags and
highRestoreSecurityFlags to avoid runtime bitwise operations.
This makes the code cleaner and more maintainable by using
appropriate flags for GET vs SET operations.

Addresses review feedback on PR for issue #5427

---------

Co-authored-by: Aneesh Nireshwalia <anireshw@akamai.com>
2025-11-28 19:22:47 +00:00
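
A Windows-only sketch of the flag-selection logic described in this commit, using constants from golang.org/x/sys/windows. The helper name is hypothetical and the surrounding SetNamedSecurityInfo call is omitted.

```go
//go:build windows

package main

import (
	"fmt"

	"golang.org/x/sys/windows"
)

// daclSecurityInfo derives the SECURITY_INFORMATION flags for a
// SetNamedSecurityInfo call from the saved security descriptor's control
// bits: exactly one of PROTECTED or UNPROTECTED is set, never both, so the
// original inheritance state of the DACL is preserved.
func daclSecurityInfo(control windows.SECURITY_DESCRIPTOR_CONTROL) windows.SECURITY_INFORMATION {
	info := windows.SECURITY_INFORMATION(windows.DACL_SECURITY_INFORMATION)
	if control&windows.SE_DACL_PROTECTED != 0 {
		info |= windows.PROTECTED_DACL_SECURITY_INFORMATION
	} else {
		info |= windows.UNPROTECTED_DACL_SECURITY_INFORMATION
	}
	return info
}

func main() {
	fmt.Printf("protected:   %#x\n", daclSecurityInfo(windows.SE_DACL_PROTECTED))
	fmt.Printf("unprotected: %#x\n", daclSecurityInfo(0))
}
```
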
Winfried Plappert
ce57961f14 restic check with snapshot filters (#5469)
---------

Co-authored-by: Michael Eischer <michael.eischer@fau.de>
2025-11-28 19:12:38 +00:00
Michael Eischer
e1bc2fb71a copy: iterate through snapshots 2025-11-26 22:48:54 +01:00
Michael Eischer
8fdbdc57a0 Merge pull request #5581 from restic/dependabot/go_modules/google.golang.org/api-0.254.0
build(deps): bump google.golang.org/api from 0.248.0 to 0.254.0
2025-11-26 22:24:43 +01:00
dependabot[bot]
69ac0d84ac build(deps): bump google.golang.org/api from 0.248.0 to 0.254.0
Bumps [google.golang.org/api](https://github.com/googleapis/google-api-go-client) from 0.248.0 to 0.254.0.
- [Release notes](https://github.com/googleapis/google-api-go-client/releases)
- [Changelog](https://github.com/googleapis/google-api-go-client/blob/main/CHANGES.md)
- [Commits](https://github.com/googleapis/google-api-go-client/compare/v0.248.0...v0.254.0)

---
updated-dependencies:
- dependency-name: google.golang.org/api
  dependency-version: 0.254.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-26 21:12:48 +00:00
Michael Eischer
0a96f0d623 Merge pull request #5578 from restic/dependabot/go_modules/cloud.google.com/go/storage-1.57.1
build(deps): bump cloud.google.com/go/storage from 1.56.1 to 1.57.1
2025-11-26 22:12:09 +01:00
Michael Eischer
0d8b715d92 Merge pull request #5547 from restic/dependabot/go_modules/golang-x-deps-3a742399ff
build(deps): bump the golang-x-deps group with 8 updates
2025-11-26 22:11:35 +01:00
dependabot[bot]
31e3717b25 build(deps): bump the golang-x-deps group with 8 updates
Bumps the golang-x-deps group with 8 updates:

| Package | From | To |
| --- | --- | --- |
| [golang.org/x/crypto](https://github.com/golang/crypto) | `0.41.0` | `0.42.0` |
| [golang.org/x/net](https://github.com/golang/net) | `0.43.0` | `0.44.0` |
| [golang.org/x/oauth2](https://github.com/golang/oauth2) | `0.30.0` | `0.31.0` |
| [golang.org/x/sync](https://github.com/golang/sync) | `0.16.0` | `0.17.0` |
| [golang.org/x/sys](https://github.com/golang/sys) | `0.35.0` | `0.36.0` |
| [golang.org/x/term](https://github.com/golang/term) | `0.34.0` | `0.35.0` |
| [golang.org/x/text](https://github.com/golang/text) | `0.28.0` | `0.29.0` |
| [golang.org/x/time](https://github.com/golang/time) | `0.12.0` | `0.13.0` |


Updates `golang.org/x/crypto` from 0.41.0 to 0.42.0
- [Commits](https://github.com/golang/crypto/compare/v0.41.0...v0.42.0)

Updates `golang.org/x/net` from 0.43.0 to 0.44.0
- [Commits](https://github.com/golang/net/compare/v0.43.0...v0.44.0)

Updates `golang.org/x/oauth2` from 0.30.0 to 0.31.0
- [Commits](https://github.com/golang/oauth2/compare/v0.30.0...v0.31.0)

Updates `golang.org/x/sync` from 0.16.0 to 0.17.0
- [Commits](https://github.com/golang/sync/compare/v0.16.0...v0.17.0)

Updates `golang.org/x/sys` from 0.35.0 to 0.36.0
- [Commits](https://github.com/golang/sys/compare/v0.35.0...v0.36.0)

Updates `golang.org/x/term` from 0.34.0 to 0.35.0
- [Commits](https://github.com/golang/term/compare/v0.34.0...v0.35.0)

Updates `golang.org/x/text` from 0.28.0 to 0.29.0
- [Release notes](https://github.com/golang/text/releases)
- [Commits](https://github.com/golang/text/compare/v0.28.0...v0.29.0)

Updates `golang.org/x/time` from 0.12.0 to 0.13.0
- [Commits](https://github.com/golang/time/compare/v0.12.0...v0.13.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-version: 0.42.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-deps
- dependency-name: golang.org/x/net
  dependency-version: 0.44.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-deps
- dependency-name: golang.org/x/oauth2
  dependency-version: 0.31.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-deps
- dependency-name: golang.org/x/sync
  dependency-version: 0.17.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-deps
- dependency-name: golang.org/x/sys
  dependency-version: 0.36.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-deps
- dependency-name: golang.org/x/term
  dependency-version: 0.35.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-deps
- dependency-name: golang.org/x/text
  dependency-version: 0.29.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-deps
- dependency-name: golang.org/x/time
  dependency-version: 0.13.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: golang-x-deps
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-26 20:56:35 +00:00
dependabot[bot]
42133ccffe build(deps): bump cloud.google.com/go/storage from 1.56.1 to 1.57.1
Bumps [cloud.google.com/go/storage](https://github.com/googleapis/google-cloud-go) from 1.56.1 to 1.57.1.
- [Release notes](https://github.com/googleapis/google-cloud-go/releases)
- [Changelog](https://github.com/googleapis/google-cloud-go/blob/main/CHANGES.md)
- [Commits](https://github.com/googleapis/google-cloud-go/compare/storage/v1.56.1...storage/v1.57.1)

---
updated-dependencies:
- dependency-name: cloud.google.com/go/storage
  dependency-version: 1.57.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-26 20:56:27 +00:00
Michael Eischer
77374b5bf0 Merge pull request #5619 from restic/bump-go-version
bump minimum go version to 1.24
2025-11-26 21:55:02 +01:00
Michael Eischer
f3a89bfff6 Merge pull request #5612 from MichaelEischer/repository-async-saveblob
repository: add async blob upload method
2025-11-26 21:34:35 +01:00
Michael Eischer
7696e4b495 bump minimum go version to 1.24 2025-11-26 21:33:40 +01:00
Michael Eischer
5cc8636047 Merge pull request #5614 from MichaelEischer/fix-lookupblobsize
repository: fix LookupBlobSize to also return pending blobs
2025-11-26 21:24:32 +01:00
Michael Eischer
6769d26068 archiver: improve test reliability 2025-11-26 21:21:16 +01:00
Michael Eischer
5607fd759f repository: fix race condition for blobSaver shutdown
wg.Go() may not be called after wg.Wait(). This prevents connecting two
errgroups such that the errors are propagated between them if the child
errgroup dynamically starts goroutines. Instead use just a single errgroup,
and sequence the shutdown using a sync.WaitGroup. This is far simpler
and does not require any "clever" tricks.
2025-11-26 21:18:22 +01:00
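
A compact sketch of the shutdown sequencing described above: all goroutines run in a single errgroup (so errors propagate via Wait), while a plain sync.WaitGroup only decides when the work channel can be closed. The channel and worker structure are illustrative, not restic's blobSaver.

```go
package main

import (
	"context"
	"fmt"
	"sync"

	"golang.org/x/sync/errgroup"
)

func main() {
	g, ctx := errgroup.WithContext(context.Background())
	ch := make(chan int)

	// All goroutines live in one errgroup so errors reach g.Wait().
	// The sync.WaitGroup only sequences the shutdown: once every producer
	// is done, the channel is closed and the worker drains it.
	var producers sync.WaitGroup
	for i := 0; i < 3; i++ {
		producers.Add(1)
		g.Go(func() error {
			defer producers.Done()
			select {
			case ch <- i:
				return nil
			case <-ctx.Done():
				return ctx.Err()
			}
		})
	}
	g.Go(func() error {
		producers.Wait()
		close(ch)
		return nil
	})
	g.Go(func() error {
		for v := range ch {
			fmt.Println("processed", v)
		}
		return nil
	})

	if err := g.Wait(); err != nil {
		fmt.Println("error:", err)
	}
}
```
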
Michael Eischer
9f87e9096a repository: add tests for SaveBlobAsync 2025-11-26 21:18:22 +01:00
Michael Eischer
d8dcd6d115 archiver: add buffer test 2025-11-26 21:18:22 +01:00
Michael Eischer
3f92987974 archiver: assert number of uploaded chunks in fileSaver test 2025-11-26 21:18:22 +01:00
Michael Eischer
7f6fdcc52c archiver: convert buffer pool to use sync.Pool 2025-11-26 21:18:22 +01:00
Michael Eischer
dd6cb0dd8e archiver: port to repository.SaveBlobAsync 2025-11-26 21:18:22 +01:00
Michael Eischer
046b0e711d repository: add SaveBlobAsync method 2025-11-26 21:18:21 +01:00
Michael Eischer
4d2da63829 Merge pull request #5610 from MichaelEischer/associated-blob-set-everywhere
check/copy/diff/stats: reduce memory usage
2025-11-26 21:09:26 +01:00
Michael Eischer
134893bd35 copy: use AssociatedBlobSet to keep track of processed trees 2025-11-26 21:00:18 +01:00
Michael Eischer
7b59dd7cf4 add changelog 2025-11-26 20:59:39 +01:00
Michael Eischer
84dda4dc74 check: use AssociatedBlobSet 2025-11-26 20:59:39 +01:00
Michael Eischer
46ebee948f stats: use AssociatedBlobSet 2025-11-26 20:59:39 +01:00
Michael Eischer
d91fe1d7e1 diff: use AssociatedBlobSet 2025-11-26 20:59:39 +01:00
Michael Eischer
ff099a216a copy: use AssociatedBlobSet 2025-11-26 20:59:38 +01:00
Michael Eischer
07d090f233 repository: expose AssociatedBlobSet via repository interface 2025-11-26 20:59:08 +01:00
Michael Eischer
0f05277b47 index: add sub and intersect method to AssociatedSet 2025-11-26 20:59:08 +01:00
Michael Eischer
7e80536a9b Merge pull request #5472 from wplapper/cmd_copy_stream
restic copy --stream: run one large copy operation crossing snapshot boundaries - issue #5453
2025-11-26 20:57:46 +01:00
Michael Eischer
f9e5660e75 output which source and target snapshot belong together 2025-11-23 22:01:53 +01:00
Michael Eischer
e79b01d82f more aggressive batching 2025-11-23 21:46:03 +01:00
Michael Eischer
857b42fca4 merge into existing copy test 2025-11-23 19:08:49 +01:00
Michael Eischer
39db78446f Simplify test 2025-11-23 19:05:55 +01:00
Michael Eischer
f1aabdd293 index: add test for pending blobs 2025-11-23 18:08:56 +01:00
Michael Eischer
50d376c543 repository: fix LookupBlobSize to also report pending blobs 2025-11-23 17:55:13 +01:00
Michael Eischer
7d08c9282a align docs 2025-11-23 17:51:07 +01:00
Michael Eischer
cf409b7c66 automatically batch snapshots in copy 2025-11-23 17:40:37 +01:00
Michael Eischer
f95dc73d38 deduplicate blob enqueuing 2025-11-23 17:13:10 +01:00
Michael Eischer
63bc1405ea unify snapshot copy codepaths 2025-11-23 17:12:54 +01:00
Michael Eischer
405813f250 repository: fix LookupBlobSize to also report pending blobs 2025-11-23 17:09:07 +01:00
Michael Eischer
05364500b6 use correct context 2025-11-23 16:25:09 +01:00
Michael Eischer
e775192fe7 don't sort snapshots, drop duplicate code and cleanup copyTreeBatched function signature 2025-11-23 16:20:40 +01:00
Michael Eischer
4395a77154 copy: remove bogus seenBlobs set 2025-11-23 16:06:45 +01:00
Michael Eischer
81d8bc4ade repository: replace CopyBlobs with Repack implementation 2025-11-23 16:06:29 +01:00
Michael Eischer
d681b8af5e Merge pull request #5611 from insertish/docs/scripting-tag-schema
docs: correct the schema provided for tag summary
2025-11-23 15:45:53 +01:00
izzy
629eaa5d21 docs: correct the schema provided for tag summary 2025-11-20 17:35:25 +00:00
Michael Eischer
6174c91042 Merge pull request #5588 from seqizz/g_timezoneshow
snapshots: Show timezone in non-compact output
2025-11-19 22:06:37 +01:00
Winfried Plappert
b24b088978 restic copy --batch: The mighty linter
I cave in - no double comment
2025-11-19 07:34:39 +00:00
Winfried Plappert
fc3de018bc restic copy --batch - fussy linter
internal/repository/repack.go: I have to please the mighty linter.
2025-11-19 07:29:09 +00:00
Winfried Plappert
b87f7586e4 restic copy --batch: a fresh start from commit 382616747
Instead of rebasing my code, I decided to start fresh, since WithBlobUploader()
has been introduced.

changelog/unreleased/issue-5453:
doc/045_working_with_repos.rst:
the usual

cmd/restic/cmd_copy.go:
gather all snaps to be collected - collectAllSnapshots()
run overall copy step - func copyTreeBatched()
helper copySaveSnapshot() to save the corresponding snapshot

internal/repository/repack.go:
introduce wrapper CopyBlobs(), which passes parameter `uploader restic.BlobSaver` from
WithBlobUploader() via copyTreeBatched() to repack().

internal/backend/local/local_windows.go:
I did not touch it, but gofmt did: whitespace
2025-11-19 07:09:24 +00:00
Gürkan
dc4e9b31f6 snapshots: Show timezone in non-compact output 2025-11-18 13:32:44 +01:00
Michael Eischer
8767549367 Merge pull request #5601 from MichaelEischer/snapshots-fix-groupby-with-latest
snapshots: correctly handle --latest in combination with --group-by
2025-11-17 22:50:50 +01:00
Michael Eischer
5afe61585b snapshots: correctly handle --latest in combination with --group-by 2025-11-17 22:26:57 +01:00
Michael Eischer
46f3ece883 Merge pull request #5597 from MichaelEischer/bump-go-for-standalone-docker
bump go version in dockerfile to go 1.25
2025-11-17 22:05:45 +01:00
Michael Eischer
96adbbaa42 Merge pull request #5599 from MichaelEischer/prune-clean-error
prune: return proper error if blob cannot be found
2025-11-17 22:05:17 +01:00
Michael Eischer
7297047b71 Merge pull request #5600 from MichaelEischer/docs-tmp-var-on-windows
only suggest TMP as tmp dir variable on windows
2025-11-17 22:04:42 +01:00
Michael Eischer
132f2f8a23 Merge pull request #5602 from MichaelEischer/fix-flaky-rclone-test
rclone: fix rare test failure if rclone cannot be started
2025-11-17 22:04:10 +01:00
Michael Eischer
a519d1e8df Merge pull request #5603 from MichaelEischer/debug-flaky-windows-test
restore: enable debug logging for flaky windows test
2025-11-17 22:03:38 +01:00
Paulo Saraiva
c1a89d5150 Allow for a personal token to be specified for self-updates (#5568)
* Allow for a personal token to be specified for self-updates

This change allows setting the $GITHUB_ACCESS_TOKEN environment variable to a GitHub personal access token, allowing e.g. for higher rate limits

* Refactor github request and add test

---------

Co-authored-by: Paulo Saraiva <pauloman@cern.ch>
2025-11-17 21:39:39 +01:00
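
A sketch of how such a token might be attached to a GitHub API request; the GITHUB_ACCESS_TOKEN variable name follows the commit, while the request helper and URL are illustrative, not restic's actual self-update code.

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

// newGithubRequest builds a GitHub API request and, if GITHUB_ACCESS_TOKEN
// is set, attaches it as a bearer token so authenticated (higher) rate
// limits apply to the self-update version check.
func newGithubRequest(url string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Accept", "application/vnd.github+json")
	if token := os.Getenv("GITHUB_ACCESS_TOKEN"); token != "" {
		req.Header.Set("Authorization", "Bearer "+token)
	}
	return req, nil
}

func main() {
	req, err := newGithubRequest("https://api.github.com/repos/restic/restic/releases/latest")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```
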
Michael Eischer
3826167474 Merge pull request #5424 from Crazycatz00/sebackup-fixes
Windows Backup Privilege Tweaks
2025-11-16 21:35:35 +01:00
Michael Eischer
98f56d8ada restore: enable debug logging for flaky windows test 2025-11-16 20:24:19 +01:00
Michael Eischer
1caeb2aa4d rclone: fix rare test failure if rclone cannot be started 2025-11-16 20:14:21 +01:00
crazycatz00
3ab68d4d11 fs: Clarified documentation 2025-11-16 11:53:13 -05:00
Michael Eischer
ffc5e9bd5c only suggest TMP as tmp dir variable on windows
TMP takes precedence over TEMP.
2025-11-16 17:31:36 +01:00
Michael Eischer
0ff3e20c4b prune: return proper error if blob cannot be found 2025-11-16 17:04:03 +01:00
Michael Eischer
3b854d9c04 Merge pull request #5449 from provokateurin/restore-ownership-by-name
feat(internal/fs/node): Restore ownership by name
2025-11-16 16:50:36 +01:00
ferringb
87f26accb7 feat: add integrated nice and ionice options for docker (#5448)
The intended usage here is to basically kick restic as a background
"do it, but don't bother my normal load" process.

This allows passing the following environment variables in to
influence scheduling:

- NICE: usual CPU nice.  Defaults to 0.  This requires CAP_SYS_NICE
  to set a negative nice (i.e., prioritize).
- IONICE_CLASS: usual ionice class.  Note that setting realtime
  requires CAP_SYS_ADMIN.  Also note the actual ionice default
  is "none".
- IONICE_PRIORITY: set the priority within the given class.  Ignored
  if no class is specified due to class default of "no scheduler".

---------

Signed-off-by: Brian Harring <ferringb@gmail.com>
Co-authored-by: Michael Eischer <michael.eischer@fau.de>
2025-11-16 16:42:33 +01:00
provokateurin
8fae46011a feat(internal/fs/node): Restore ownership by name 2025-11-16 16:40:58 +01:00
Michael Eischer
c854338ad1 Merge pull request #5596 from mikix/chmod-again
backend/local: fix "operation not supported" when unlocking
2025-11-16 14:25:04 +01:00
Michael Eischer
10a10b8d63 bump go version in dockerfile to go 1.25
Note that this go version is independent of that used for the official
release binaries.
2025-11-16 14:22:43 +01:00
Michael Terry
7f3e3b77ce backend/local: fix "operation not supported" when unlocking
If the repo is on a mounted folder that doesn't support chmod (like
SMB), it was causing an "operation not supported" error when trying to
chmod 666 a file before deleting it.

But it isn't generally needed before deleting a file (the folder
permissions matter there, not the file permissions). So, just drop it.
2025-11-16 08:09:51 -05:00
Michael Eischer
d81f95c777 Merge pull request #5464 from wplapper/cmd_copy_v2
restic copy - add more status counters - issue #5175
2025-11-16 13:55:41 +01:00
DoS007
2bd6649813 docs: add info about ssd wear in backend connections (#5496)
---------

Co-authored-by: Michael Eischer <michael.eischer@fau.de>
2025-11-16 12:48:56 +00:00
Winfried Plappert
3b71c44755 restic copy - statistics counters
fixed typo in changelog/unreleased/pull-5319
2025-11-16 13:47:11 +01:00
Winfried Plappert
1e3b96bf99 restic copy - statistics feature
reword the description of the PR
2025-11-16 13:47:11 +01:00
Winfried Plappert
25611f4628 restic copy - add statistics counters
cmd/restic/cmd_copy.go:
add function copyStats() and call it before the actual copying starts.

changelog/unreleased/pull-5319:
rephrased wording of the statistics counters.
2025-11-16 13:47:10 +01:00
Winfried Plappert
90ac3efa88 restic copy - add additional status counters
'copyTree()' now counts and sizes the blobs in 'copyBlobs' and prints them out
via 'Verbosef()'.
2025-11-16 13:46:27 +01:00
Michael Eischer
5b173d2206 Merge pull request #5567 from Paulomen2712/add_better_forget_example_docs
Improve example for forget --keep-daily
2025-11-16 13:45:13 +01:00
Michael Eischer
14f3bc8232 Merge pull request #5560 from MichaelEischer/index-iterators
index: port to modern Go iterators
2025-11-16 13:24:48 +01:00
Michael Eischer
4ef7b4676b Merge pull request #5559 from MichaelEischer/cleanup-repack
repository: remove unused return value from Repack
2025-11-16 13:01:00 +01:00
Michael Eischer
b587c126e0 Fix linter warning 2025-11-16 12:56:37 +01:00
Michael Eischer
9944ef7a7c index: convert AssociatedSet to go iterators 2025-11-16 12:56:37 +01:00
Michael Eischer
38c543457e index: convert to implement modern go iterators 2025-11-16 12:56:37 +01:00
Michael Eischer
393e49fc89 repository: update comment 2025-11-16 12:51:46 +01:00
Michael Eischer
a0925fa922 repository: set progress bar maximum in Repack 2025-11-16 12:51:46 +01:00
Michael Eischer
b2afccbd96 repository: remove unused obsoletePacks return values from Repack 2025-11-16 12:51:46 +01:00
Michael Eischer
0624b656b8 Merge pull request #5558 from MichaelEischer/simplify-blob-upload
repository: enforce correct usage of SaveBlob
2025-11-16 12:51:01 +01:00
Brook
fadeb03f84 Update Nix/NixOS installation instructions (#5591)
Corrected spelling errors and updated installation instructions for Nix/NixOS.
2025-11-16 11:31:47 +00:00
Michael Eischer
fc06a79518 Merge pull request #5579 from restic/dependabot/go_modules/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob-1.6.3
build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/storage/azblob from 1.6.2 to 1.6.3
2025-11-16 12:03:02 +01:00
Michael Eischer
d5977deb49 Merge pull request #5580 from restic/dependabot/go_modules/github.com/pkg/sftp-1.13.10
build(deps): bump github.com/pkg/sftp from 1.13.9 to 1.13.10
2025-11-16 12:02:30 +01:00
Michael Eischer
e3b7bbd020 Merge pull request #5552 from ferringb/dockerignore
add a dockerignore
2025-11-16 12:00:52 +01:00
Michael Eischer
157f174dd9 Merge pull request #5370 from hashier/feat/exclude-macOS-cloud-files
feat(backup): add possibility to exclude macOS cloud-only files
2025-11-16 11:57:37 +01:00
Alex Xu
bcc5417dc8 Merge pull request #5386 from Hello71/patch-2
doc: Add ambient caps example, edit file caps
2025-11-16 11:54:43 +01:00
crazycatz00
d14823eb81 fs: Attempt to enable file system privileges on initialization.
Add tests to verify privileges' effects.
2025-11-07 19:31:59 -05:00
crazycatz00
01bf8977e7 fs: Use backup privileges when reading extended attributes for files too. 2025-11-07 19:31:57 -05:00
dependabot[bot]
f5a18a7799 build(deps): bump github.com/pkg/sftp from 1.13.9 to 1.13.10
Bumps [github.com/pkg/sftp](https://github.com/pkg/sftp) from 1.13.9 to 1.13.10.
- [Release notes](https://github.com/pkg/sftp/releases)
- [Commits](https://github.com/pkg/sftp/compare/v1.13.9...v1.13.10)

---
updated-dependencies:
- dependency-name: github.com/pkg/sftp
  dependency-version: 1.13.10
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-01 01:02:06 +00:00
dependabot[bot]
f756c6a441 build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
Bumps [github.com/Azure/azure-sdk-for-go/sdk/storage/azblob](https://github.com/Azure/azure-sdk-for-go) from 1.6.2 to 1.6.3.
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/storage/azblob/v1.6.2...sdk/storage/azblob/v1.6.3)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
  dependency-version: 1.6.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-01 01:02:03 +00:00
Brian Harring
8cbca05853 add a dockerignore
This is strictly for tightening the container to be more hygienic.

Signed-off-by: Brian Harring <ferringb@gmail.com>
2025-10-29 16:18:40 +01:00
Paulo Manuel Ferreira Dos Santos Saraiva
b0eb3652b8 Improve example for forget --keep-daily 2025-10-22 11:31:50 +02:00
Michael Eischer
71432c7f4b Merge pull request #5555 from MichaelEischer/extract-globaloptions
Split globalOptions into separate package
2025-10-12 18:31:44 +02:00
Michael Eischer
c6e33c3954 repository: enforce that SaveBlob is called within WithBlobUploader
This is achieved by removing SaveBlob from the public API and only
returning it via an uploader object that is passed in by
WithBlobUploader.
2025-10-12 18:26:26 +02:00
Michael Eischer
1ef785daa3 Merge pull request #5544 from zmanda/fix-gh-5531-azure-backend-upgrade-service-version
azure: use PutBlob API for uploads instead of PutBlock API + PutBlock List API
2025-10-12 18:24:33 +02:00
Michael Eischer
aa0fb0210a Merge pull request #5556 from greatroar/cleanup
ui/backup: Prepend, then sort (micro-optimization)
2025-10-12 18:22:36 +02:00
Michael Eischer
b6aef592f5 global: split CreateRepository and OpenRepository into smaller functions 2025-10-12 18:20:45 +02:00
Michael Eischer
588c40aaef global: unexport ReadPassword and ReadRepo 2025-10-12 18:08:26 +02:00
Michael Eischer
aa7bd241d9 init: move more logic into global package 2025-10-12 18:08:26 +02:00
Michael Eischer
536a2f38bd Merge pull request #5554 from MichaelEischer/termstatus-flush
termstatus: flush before reading password from terminal
2025-10-12 17:59:03 +02:00
Michael Eischer
a816b827cf extract GlobalOptions into internal/global package
Rough steps:
```
mv cmd/restic/global* cmd/restic/secondary_repo* internal/global/
sed -i "s/package main/package global/" internal/global/*.go
Rename "GlobalOptions" to "Options" in internal/global/
Replace everywhere " GlobalOptions" -> " global.Options"
Replace everywhere "\*GlobalOptions" -> " *global.Options"
Make SecondaryRepoOptions public
Make create public
Make version public
```
2025-10-12 17:56:28 +02:00
Michael Eischer
2c677d8db4 global: make private fields public 2025-10-12 17:56:28 +02:00
Michael Eischer
394c8de502 add package to create a prepopulated backend registry 2025-10-12 17:56:28 +02:00
Michael Eischer
a632f490fa Merge pull request #5550 from MichaelEischer/refactor-check-data-selection
check: refactor pack selection for read data
2025-10-12 17:51:00 +02:00
Michael Eischer
718b97f37f Merge pull request #5551 from restic/slower-terminal-output
Reduce terminal progress fps to 10
2025-10-12 17:47:27 +02:00
Michael Eischer
ac4642b479 repository: replace StartPackUploader+Flush with WithBlobUploader
The new method combines both steps into a single wrapper function, which
ensures that both are always called as a pair. As an additional benefit,
this slightly reduces the boilerplate needed to upload blobs.
2025-10-08 22:49:45 +02:00
greatroar
20b38010e1 ui/backup: Prepend, then sort (micro-optimization) 2025-10-06 16:16:37 +02:00
Srigovind Nayak
f9ff2301e8 changelog: add a changelog entry for azure PutBlob API changes 2025-10-05 21:48:02 +05:30
Srigovind Nayak
e65ee3cba8 fix: keep the PutBlock Size to 100 MiB
No complaints in the past.
2025-10-05 21:41:26 +05:30
Srigovind Nayak
34a94afc48 azure: update upload size constants to reduce memory allocation 2025-10-05 21:41:25 +05:30
Srigovind Nayak
9bcd09bde0 azure: reduce singleBlockMaxSize to accommodate 32-bit systems 2025-10-05 21:41:25 +05:30
Srigovind Nayak
e80e832130 azure: remove saveSmall, use only PutBlob API 2025-10-05 21:41:25 +05:30
Srigovind Nayak
dd2d562b7b azure: enhanced upload with single PutBlob API and configurable upload methods 2025-10-05 21:41:25 +05:30
Michael Eischer
e320ef0a62 add changelog 2025-10-05 16:14:16 +02:00
Michael Eischer
30ed992af9 termstatus: flush output before returning OutputRaw() writer
This prevents mangling the output due to delayed messages.
2025-10-05 16:14:16 +02:00
Srigovind Nayak
481fcb9ca7 backup: return exit code 3 if not all targets are available (#5347)
To make the exit code behaviour consistent with files that are inaccessible during the backup phase, restic now exits with code 3 if not all target files/folders are accessible for backup.

---------

Co-authored-by: Michael Eischer <michael.eischer@fau.de>
2025-10-05 15:38:52 +02:00
Srigovind Nayak
22f254c9ca feat: allow override env RESTIC_HOST with flag to filter all snapshots (#5541) 2025-10-05 13:22:50 +02:00
Michael Eischer
f17027eeaa termstatus: flush before reading password from terminal 2025-10-04 23:06:57 +02:00
Christopher Loessl
f3d95893b2 feat(backup): add possibility to exclude macOS cloud-only files 2025-10-04 19:22:51 +02:00
Michael Eischer
4759e58994 Reduce terminal progress fps to 10 2025-10-04 17:34:40 +02:00
Winfried Plappert
a2a49cf784 list integration test: error scanning 'restic list blobs' (#5311)
Co-authored-by: Michael Eischer <michael.eischer@fau.de>
2025-10-04 12:18:32 +00:00
Michael Eischer
b7bbb408ee check: refactor pack selection for read data
Drop the `packs` map from the internal state of the checker. Instead the
Packs(...) method now calls a filter callback that can select the
packs intended for checking.
2025-10-03 23:45:05 +02:00
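A rough sketch of the filter-callback selection described above, with illustrative names (`Pack`, `Packs` here are stand-ins, not the checker's real types): instead of keeping a separate map, the caller passes a predicate that picks the packs to read-check.

```go
package main

import "fmt"

type Pack struct {
	ID   string
	Size int64
}

// Packs calls the filter for every known pack and returns only the packs
// selected for checking, instead of keeping a separate packs map around.
func Packs(all []Pack, filter func(Pack) bool) []Pack {
	var selected []Pack
	for _, p := range all {
		if filter(p) {
			selected = append(selected, p)
		}
	}
	return selected
}

func main() {
	all := []Pack{{ID: "aa", Size: 16 << 20}, {ID: "bb", Size: 4 << 20}}
	// Example: only read-check packs larger than 8 MiB.
	big := Packs(all, func(p Pack) bool { return p.Size > 8<<20 })
	fmt.Println(len(big)) // 1
}
```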
Michael Eischer
35fca09326 Merge pull request #5489 from MichaelEischer/fix-group-repos
docs: fix permission setup for group-accessible repo
2025-10-03 23:03:50 +02:00
Michael Eischer
adbd4a1d18 Fully rework docs for group-accessible repositories
Just tell the user what to do instead of explaining too many details.
I've dropped the read-only variant as it actually has no representation
in the local and sftp backends. Instead it relied on both backends
initially creating all directories, which can't actually be guaranteed.

Based on a suggestion by @brad2014 in significant parts.
2025-10-03 21:24:57 +02:00
Michael Eischer
537d107b6c docs: use absolute permissions for group accessible repositories 2025-10-03 21:24:57 +02:00
Michael Eischer
06aa0f08cb docs: fix permission setup for group-accessible repo
The group always needs execute access for the directories. In addition,
files should be always set to read-only for everyone as restic never
modifies files.
2025-10-03 21:24:57 +02:00
Rani
3ae6a69154 Bugfix(sftp): fix loose permissions on sftp backend. (#5497) 2025-10-03 18:20:52 +00:00
Michael Eischer
264cd67c36 Merge pull request #5532 from MichaelEischer/checker-cleanup
Replace Repository.SetIndex with internal helper
2025-10-03 20:08:14 +02:00
Michael Eischer
fd241b8ec7 Merge pull request #5527 from MichaelEischer/drop-s3-static-credentials
s3: drop manual credentials loading from environment
2025-10-03 19:57:55 +02:00
Michael Eischer
76aa9e4f7c Merge pull request #5549 from restic/dependabot/go_modules/github.com/peterbourgon/unixtransport-0.0.7
build(deps): bump github.com/peterbourgon/unixtransport from 0.0.6 to 0.0.7
2025-10-03 19:56:02 +02:00
Michael Eischer
aae1acf4d7 check: fix dysfunctional test cases 2025-10-03 19:49:51 +02:00
dependabot[bot]
cc0480fc32 build(deps): bump github.com/peterbourgon/unixtransport
Bumps [github.com/peterbourgon/unixtransport](https://github.com/peterbourgon/unixtransport) from 0.0.6 to 0.0.7.
- [Release notes](https://github.com/peterbourgon/unixtransport/releases)
- [Commits](https://github.com/peterbourgon/unixtransport/compare/v0.0.6...v0.0.7)

---
updated-dependencies:
- dependency-name: github.com/peterbourgon/unixtransport
  dependency-version: 0.0.7
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-03 17:41:39 +00:00
Michael Eischer
838ef0a9bd Merge pull request #5546 from MichaelEischer/group-golang-dependencies
CI: group dependency updates for golang.org/x/*
2025-10-03 19:40:15 +02:00
Michael Eischer
4426dfe6a9 repository: replace SetIndex method with internal loadIndexWithCallback method 2025-10-03 19:36:57 +02:00
Michael Eischer
f0955fa931 repository: add Checker() method to repository to replace unchecked cast 2025-10-03 19:34:33 +02:00
Michael Eischer
189b295c30 repository: add dedicated test helper 2025-10-03 19:34:33 +02:00
Michael Eischer
82971ad7f0 check: split index/pack check into repository package 2025-10-03 19:34:32 +02:00
Michael Eischer
bfc2ce97fd check: don't keep extra MasterIndex reference 2025-10-03 19:32:15 +02:00
Michael Eischer
d84c3e3c60 CI: group dependency updates for golang.org/x/* 2025-10-03 19:28:30 +02:00
Michael Eischer
93720f0717 Merge pull request #5525 from MichaelEischer/split-restic-directory
Extract snapshot data types from restic package
2025-10-03 19:24:31 +02:00
Michael Eischer
70a24cca85 ignore linter warning 2025-10-03 19:10:40 +02:00
Michael Eischer
56ac8360c7 data: split node and snapshot code from restic package 2025-10-03 19:10:39 +02:00
Michael Eischer
c85b157e0e restic: move interfaces between files to prepare refactor 2025-10-03 19:06:32 +02:00
Michael Eischer
13e476e1eb Merge pull request #5518 from MichaelEischer/termstatus-everywhere
Consolidate terminal input/output functionality in termstatus.Terminal
2025-10-03 19:05:28 +02:00
Michael Eischer
3335f62a8f Fix linter warnings 2025-10-03 18:55:46 +02:00
Michael Eischer
d8da3d2f2d termstatus: increase test coverage 2025-10-03 18:55:46 +02:00
Michael Eischer
df7924f4df node: report error on xattr retrieval using standard error logging 2025-10-03 18:55:46 +02:00
Michael Eischer
f2b9ea6455 termstatus: use errWriter if terminal commands fail 2025-10-03 18:55:46 +02:00
Michael Eischer
711194276c remove unused printer from ReadPassword 2025-10-03 18:55:46 +02:00
Michael Eischer
f045297348 termstatus: fix typo in comment 2025-10-03 18:55:46 +02:00
Michael Eischer
52eb66929f repository: deduplicate index progress bar initialization 2025-10-03 18:55:46 +02:00
Michael Eischer
b459d66288 termstatus: additional comments 2025-10-03 18:55:46 +02:00
Michael Eischer
76b2cdd4fb replace globalOptions.stdout with termstatus.OutputWriter 2025-10-03 18:55:46 +02:00
Michael Eischer
c293736841 drop unused stderr from GlobalOptions 2025-10-03 18:55:46 +02:00
Michael Eischer
1939cff334 restore: embed progress.Printer in restore-specific printer 2025-10-03 18:55:46 +02:00
Michael Eischer
1a76f988ea backup: embed progress.Printer in backup specific printer 2025-10-03 18:55:46 +02:00
Michael Eischer
e753941ad3 move NewProgressPrinter to ui package 2025-10-03 18:55:46 +02:00
Michael Eischer
ff5a0cc851 termstatus: fully wrap reading password from terminal 2025-10-03 18:55:46 +02:00
Michael Eischer
013c565c29 standardize shortened variable name for GlobalOptions to gopts 2025-10-03 18:55:46 +02:00
Michael Eischer
96af35555a termstatus: add stdin and inject into backup command 2025-10-03 18:55:46 +02:00
Michael Eischer
ca5b0c0249 get rid of fmt.Print* usages 2025-10-03 18:55:46 +02:00
Michael Eischer
3410808dcf deduplicate termstatus setup 2025-10-03 18:55:46 +02:00
Michael Eischer
1ae2d08d1b termstatus: centralize OutputIsTerminal checks 2025-10-03 18:55:46 +02:00
Michael Eischer
c745e4221e termstatus: use errWriter instead of os.Stderr 2025-10-03 18:22:42 +02:00
Michael Eischer
b6c50662da repository: don't ignore cache clearing error 2025-10-03 18:22:42 +02:00
Michael Eischer
4dc71f24c5 backends: pass error logger to backends 2025-10-03 18:22:42 +02:00
Michael Eischer
13f743e26b profiling: inject os.Stderr instead of directly using it 2025-10-03 18:22:42 +02:00
Michael Eischer
3e1632c412 reduce os.stdout / os.stderr usage in tests 2025-10-03 18:22:42 +02:00
Michael Eischer
6bd85d2412 reduce usages of globalOptions variable 2025-10-03 18:22:42 +02:00
Michael Eischer
e4395a9d73 Merge pull request #5535 from restic/dependabot/github_actions/docker/login-action-3.6.0
build(deps): bump docker/login-action from 3.5.0 to 3.6.0
2025-10-03 18:21:27 +02:00
Michael Eischer
4d1f6b1fe2 Merge pull request #5536 from restic/dependabot/github_actions/actions/setup-go-6
build(deps): bump actions/setup-go from 5 to 6
2025-10-03 18:20:47 +02:00
Michael Eischer
331260e1d4 Merge pull request #5537 from restic/dependabot/go_modules/github.com/Azure/azure-sdk-for-go/sdk/azidentity-1.12.0
build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azidentity from 1.11.0 to 1.12.0
2025-10-03 18:20:05 +02:00
Michael Eischer
eb13789b2b Merge pull request #5528 from MichaelEischer/cleanup-fatalf-usage
Cleanup fatalf usage
2025-10-01 20:17:30 +02:00
dependabot[bot]
0cd079147f build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azidentity
Bumps [github.com/Azure/azure-sdk-for-go/sdk/azidentity](https://github.com/Azure/azure-sdk-for-go) from 1.11.0 to 1.12.0.
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/sdk-breaking-changes-guide-migration.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.11.0...sdk/azcore/v1.12.0)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azidentity
  dependency-version: 1.12.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-01 01:02:17 +00:00
dependabot[bot]
0b4b092941 build(deps): bump actions/setup-go from 5 to 6
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5 to 6.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-01 01:01:54 +00:00
dependabot[bot]
01d3357880 build(deps): bump docker/login-action from 3.5.0 to 3.6.0
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.5.0 to 3.6.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](184bdaa072...5e57cd1181)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: 3.6.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-10-01 01:01:51 +00:00
Michael Eischer
1c7bb15327 Merge pull request #5451 from greatroar/concurrency
Concurrency simplifications
2025-09-24 22:22:40 +02:00
Michael Eischer
d491c1bdbf use errors.Fatalf instead of custom formatting 2025-09-24 22:11:54 +02:00
Michael Eischer
97933d1404 remove trailing newlines from errors.Fatalf calls 2025-09-24 22:11:34 +02:00
Michael Eischer
4edfd36c8f Merge pull request #5363 from zmanda/fix-gh-5258-backup-exits-with-wrong-code-on-ctrl-c
bugfix: fatal errors do not keep underlying error
2025-09-24 22:04:38 +02:00
Michael Eischer
a30a36ca51 s3: drop manual credentials loading from environment
credentials.EnvAWS offers a superset of the manually implemented
credentials loading. Rework the error message that is shown if no
credentials were found but either access or secret key are set.
2025-09-24 21:02:02 +02:00
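For illustration, loading S3 credentials purely through minio-go's environment-based provider could look roughly like the sketch below; the endpoint is a placeholder and this is not restic's actual backend setup code.

```go
package main

import (
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	// credentials.NewEnvAWS reads AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY,
	// AWS_SESSION_TOKEN etc. from the environment, covering what the
	// manual loading previously handled.
	creds := credentials.NewEnvAWS()

	client, err := minio.New("s3.example.com", &minio.Options{
		Creds:  creds,
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}
	_ = client
}
```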
Michael Eischer
d52f92e8cc Merge pull request #5523 from tobiaskarch/patch-1
Add OpenContainers labels to Dockerfile.release
2025-09-24 20:37:55 +02:00
Michael Eischer
a4e565d921 Merge pull request #5524 from dmotte/pr-fix-backupend
internal/archiver: fixed BackupEnd when SkipIfUnchanged is true
2025-09-24 20:32:44 +02:00
Michael Eischer
ec796e6edd Merge pull request #5526 from lyallcooper/patch-1
Fix typo in rewrite command note
2025-09-24 20:31:50 +02:00
Lyall Cooper
e30acefbff Fix typo in rewrite command note 2025-09-24 14:40:35 +09:00
Michael Eischer
3e6b5c34c9 Merge pull request #5512 from ProactiveServices/patch-1
doc: mention value for pack size setting
2025-09-23 19:35:38 +02:00
dmotte
9017fefddd internal/archiver: fixed BackupEnd when SkipIfUnchanged is true 2025-09-23 03:07:30 +02:00
Michael Eischer
93d1e3b211 Merge pull request #5519 from MichaelEischer/go-1.25
CI: add go 1.25
2025-09-22 22:44:21 +02:00
Tobias Karch
8f858829ed Add OpenContainers labels to Dockerfile.release 2025-09-22 17:37:17 +00:00
Adam Piggott
db3b3e31e6 Line breaks 2025-09-22 14:24:02 +01:00
Michael Eischer
3f7121e180 backup: adapt test to changed error message 2025-09-21 22:59:59 +02:00
Michael Eischer
d5dd8ce6a7 CI: add go 1.25 2025-09-21 22:38:34 +02:00
Michael Eischer
08443fe593 Merge pull request #5405 from restic/dependabot/github_actions/golangci/golangci-lint-action-8
build(deps): bump golangci/golangci-lint-action from 6 to 8
2025-09-21 22:37:26 +02:00
Michael Eischer
daeb55a4fb Merge pull request #5511 from greatroar/atomic
ui/progress: Restore atomics in Counter
2025-09-21 22:29:40 +02:00
Michael Eischer
6ebc23543d CI: use strict matching for generated source files in golangci-lint 2025-09-21 22:25:57 +02:00
Michael Eischer
7257cd2e5f extra linters 2025-09-21 22:24:35 +02:00
Michael Eischer
88bdf20bd8 Reduce linter ignores 2025-09-21 22:24:27 +02:00
Michael Eischer
8518c1f7d9 CI: convert golangci-lint configuration to v2 2025-09-21 22:24:15 +02:00
Michael Eischer
60d80a6127 Fix linter warnings 2025-09-21 22:24:15 +02:00
Michael Eischer
575eac8d80 CI: bump golangci version to v2 2025-09-21 22:20:37 +02:00
dependabot[bot]
5c667f0501 build(deps): bump golangci/golangci-lint-action from 6 to 8
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 6 to 8.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v6...v8)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-version: '8'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-21 22:20:37 +02:00
Michael Eischer
f091e6aed0 Merge branch 'patch-release' 2025-09-21 21:20:56 +02:00
Alexander Neumann
39a737fe14 Set development version for 0.18.1 2025-09-21 20:05:01 +02:00
Alexander Neumann
7d0aa7f2e3 Add version for 0.18.1 2025-09-21 20:04:58 +02:00
Alexander Neumann
18f18b7f99 Generate CHANGELOG.md for 0.18.1 2025-09-21 20:03:56 +02:00
Alexander Neumann
426b71e3e5 Prepare changelog for 0.18.1 2025-09-21 20:03:56 +02:00
Michael Eischer
4871390a81 Merge pull request #5514 from MichaelEischer/term-ui-helper
ui: collect Quote and Truncate helpers
2025-09-21 17:03:56 +02:00
Michael Eischer
65b21e3348 ui: collect Quote and Truncate helpers
Collect ui formatting helpers in the ui package
2025-09-21 16:44:23 +02:00
Michael Eischer
4a7b122fb6 Merge pull request #5510 from MichaelEischer/termstatus-everywhere-print-functions
Replace Printf/Verbosef/Warnf with termstatus
2025-09-21 16:42:29 +02:00
Michael Eischer
86ddee8518 ui: document Message / Printer / Terminal interfaces 2025-09-21 16:32:00 +02:00
Michael Eischer
2fe271980f backup: only pass error log function to helpers 2025-09-21 16:02:59 +02:00
Michael Eischer
4f1390436d init: remove duplication from error message 2025-09-21 15:58:29 +02:00
Michael Eischer
2d7611373e ignore JSON flag for fully unsupported commands
Honoring the flag would result in mostly empty terminal output, which is
probably worse than falling back to text output instead of JSON.
2025-09-21 15:38:29 +02:00
Michael Eischer
f71278138f drop warnf 2025-09-18 22:58:23 +02:00
Michael Eischer
7d5ebdd0b3 version: convert to termstatus 2025-09-18 22:58:23 +02:00
Michael Eischer
d6c75ba2dc prune: drop unused parameter 2025-09-17 21:18:15 +02:00
Michael Eischer
2a9105c050 forget/snapshots: properly change error returned by PrintSnapshots 2025-09-17 21:16:39 +02:00
Michael Eischer
b7bb697cf7 Merge pull request #5513 from restic/more-polish-changelogs
doc: Nitpicks on changelogs
2025-09-17 20:38:19 +02:00
Michael Eischer
b12a638322 Merge pull request #5509 from restic/polish-changelogs
slightly polish changelogs
2025-09-17 20:37:46 +02:00
Leo R. Lundgren
4e0135e628 doc: Nitpicks on changelogs 2025-09-17 18:26:21 +02:00
Adam Piggott
8e87a37df0 doc: mention value for pack size setting 2025-09-16 17:32:26 +01:00
greatroar
a8f506ea4d ui/progress: Simplify Updater
Removed a defer'd call that was a bit subtle.
2025-09-16 09:56:33 +02:00
greatroar
0a1ce4f207 ui/progress: Restore atomics in Counter
We switched from atomics to a mutex in #3189 because of an alignment
bug, but the new-style atomic types don't need manual alignment.
2025-09-16 09:49:48 +02:00
Michael Eischer
364271c6c3 Consistently use withTermstatus in tests 2025-09-15 22:37:55 +02:00
Michael Eischer
6b5c8ce14e change run* functions to accept ui.Terminal instead of *termstatus.Terminal 2025-09-15 22:37:25 +02:00
Michael Eischer
5a16b29177 remove unused global output functions 2025-09-15 22:35:48 +02:00
Michael Eischer
320fb5fb98 convert repository open/create to use termstatus 2025-09-15 22:35:32 +02:00
Michael Eischer
c14cf48776 further reduce Warnf usages 2025-09-15 22:35:16 +02:00
Michael Eischer
109a211fbe convert repository locking to use termstatus 2025-09-15 22:34:59 +02:00
Michael Eischer
9d3efc2088 cleanup progress bar helpers 2025-09-15 22:34:44 +02:00
Michael Eischer
8b5dbc18ca cleanup progress bar creation special cases 2025-09-15 22:34:28 +02:00
Michael Eischer
b0eef4b965 Initialize progress printer as early as reasonable in run functions 2025-09-15 22:34:13 +02:00
Michael Eischer
6c0dccf4a5 self-update: convert to termstatus 2025-09-15 22:33:52 +02:00
Michael Eischer
6b23d0328b find: convert to termstatus 2025-09-15 22:33:41 +02:00
Michael Eischer
52f33d2d54 snapshots: convert to termstatus 2025-09-15 22:19:19 +02:00
Michael Eischer
d89535634d unlock: convert to termstatus 2025-09-15 22:19:19 +02:00
Michael Eischer
902cd1e9d6 backup: replace Verbosef usage 2025-09-15 22:19:19 +02:00
Michael Eischer
51299b8ea7 key: convert to termstatus 2025-09-15 22:19:19 +02:00
Michael Eischer
fd8f8d64f5 init: convert to termstatus 2025-09-15 22:19:17 +02:00
Michael Eischer
114cc33fe9 generate: convert to termstatus 2025-09-15 22:17:26 +02:00
Michael Eischer
44dbd4469e tag: replace global print functions with termstatus 2025-09-15 22:17:26 +02:00
Michael Eischer
d8f3e35730 prune: replace Print call with termstatus usage 2025-09-15 22:17:26 +02:00
Michael Eischer
333dbd18d8 list: convert to termstatus 2025-09-15 22:17:26 +02:00
Michael Eischer
0226e46681 cache: convert to termstatus 2025-09-15 22:17:26 +02:00
Michael Eischer
74fb43e0c2 dump: convert to termstatus 2025-09-15 22:17:26 +02:00
Michael Eischer
69186350fc diff: convert to termstatus 2025-09-15 22:17:26 +02:00
Michael Eischer
3e7aad8916 debug: convert to termstatus 2025-09-15 22:17:26 +02:00
Michael Eischer
c3912ae7bc cat: convert to termstatus 2025-09-15 22:17:26 +02:00
Michael Eischer
d3e26f2868 ls: convert to termstatus 2025-09-15 22:17:26 +02:00
Michael Eischer
2e91e81c83 mount: convert to termstatus 2025-09-15 22:17:26 +02:00
Michael Eischer
0dcd9bee88 rewrite: convert to termstatus 2025-09-15 22:17:26 +02:00
Michael Eischer
a304826b98 repair snapshots: convert to termstatus 2025-09-15 22:17:26 +02:00
Michael Eischer
8510f09225 stats: convert to termstatus 2025-09-15 22:17:26 +02:00
Michael Eischer
e63aee2ec6 copy: convert to use termstatus 2025-09-15 22:17:26 +02:00
Michael Eischer
94b19d64be termstatus: allow retrieving the underlying writer
This is intended for special cases where it must be guaranteed that the
output on stdout exactly matches what was written to the io.Writer.
2025-09-15 22:17:26 +02:00
Michael Eischer
03600ca509 termstatus: don't buffer stdout
There's not much use in doing so as nearly every write call was paired
with a flush call. Thus, just use an unbuffered writer.
2025-09-15 21:22:07 +02:00
Michael Eischer
ef9930cce4 fix capturing stdout with termstatus 2025-09-15 20:25:17 +02:00
Michael Eischer
91ecac8003 termstatus: fix crash when printing empty string 2025-09-15 20:25:17 +02:00
Michael Eischer
e9b6149303 list: cleanup parameter order of test helper 2025-09-15 20:25:17 +02:00
Michael Eischer
32b7168a9e centralize index progress bar for termstatus 2025-09-15 20:25:17 +02:00
Michael Eischer
6cdb9a75e6 consider JSON flag in newTerminalProgressPrinter 2025-09-15 20:25:17 +02:00
Michael Eischer
9ef8e13102 slightly polish changelogs 2025-09-15 19:52:24 +02:00
Michael Eischer
4940e330c0 Merge pull request #5508 from restic/patch-release-cherrypicks
Patch release cherrypicks
2025-09-15 19:51:51 +02:00
Michael Eischer
3a63430b07 extend changelog 2025-09-15 19:34:25 +02:00
Michael Eischer
a5e814bd8d check: fix error reporting on download retry 2025-09-15 19:34:25 +02:00
Michael Eischer
398862c5c8 docs: sync compatibility section with website
This is no change in policy, just a more precise description of the
status quo.
2025-09-15 19:33:39 +02:00
Michael Eischer
b47c67fd90 update dependencies 2025-09-15 19:33:20 +02:00
Michael Eischer
81fe559222 Merge pull request #5495 from MichaelEischer/fix-check-retries
check: fix error reporting on download retry
2025-09-15 19:31:44 +02:00
Michael Eischer
f21fd9d115 Merge pull request #5494 from MichaelEischer/fix-background-handling
Refactor terminal background handling
2025-09-13 22:48:11 +02:00
Michael Eischer
d757e39992 make linter happy 2025-09-13 22:22:53 +02:00
Srigovind Nayak
ce089f7e2d errors: standardize error wrapping for Fatal errors
* replace all occurrences of `errors.Fatal(err.Error())` with `errors.Fatalf("%s", err)` so that the error wrapping is correct across the codebase

* updated the review comments
2025-09-13 23:32:40 +05:30
Srigovind Nayak
576d35b37b changelog: add bugfix changelog for issue-5258 2025-09-13 23:32:40 +05:30
Srigovind Nayak
18b8f8870f tests: add tests for preserving underlying errors 2025-09-13 23:32:39 +05:30
Srigovind Nayak
79c41966af errors: enhance fatalError type to include underlying errors 2025-09-13 23:32:39 +05:30
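A compact sketch of a fatal error type that keeps its underlying cause reachable, as the commits above describe. The names (`fatalError`, `Fatalf`) are illustrative rather than restic's internal API, and standard `%w` wrapping is used here to show the principle, assuming the goal is that `errors.Is`/`errors.As` still see the original error.

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
)

// fatalError marks an error as fatal while keeping the wrapped cause.
type fatalError struct{ err error }

func (e fatalError) Error() string { return "Fatal: " + e.err.Error() }
func (e fatalError) Unwrap() error { return e.err }

// Fatalf builds a fatal error; using %w keeps the underlying error
// reachable for errors.Is / errors.As instead of flattening it to a string.
func Fatalf(format string, args ...interface{}) error {
	return fatalError{err: fmt.Errorf(format, args...)}
}

func main() {
	err := Fatalf("open config: %w", fs.ErrNotExist)
	fmt.Println(errors.Is(err, fs.ErrNotExist)) // true: the cause survived
}
```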
Michael Eischer
c0a30e12b4 extend changelog 2025-09-08 11:54:29 +02:00
Michael Eischer
de29d74707 check: fix error reporting on download retry 2025-09-08 11:45:28 +02:00
Michael Eischer
424316e016 extend background handling changelog 2025-09-08 11:04:53 +02:00
Michael Eischer
b71b77fa77 terminal: unexport tcgetpgrp, tcsetpgrp and getpgrp 2025-09-08 11:04:38 +02:00
Michael Eischer
e7890d7b81 use standard line clearing in printProgress 2025-09-08 11:04:24 +02:00
Michael Eischer
529baf50f8 simplify message printing when restic receives signal 2025-09-08 11:04:11 +02:00
Michael Eischer
d10bd1d321 terminal: move reading password from terminal here 2025-09-08 11:03:56 +02:00
Michael Eischer
43b5166de8 terminal: cleanup determining width 2025-09-08 11:03:42 +02:00
Michael Eischer
0b0dd07f15 consolidate checks whether stdin/stdout is terminal 2025-09-08 11:03:26 +02:00
Michael Eischer
93ccc548c8 termstatus: move cursor handling to terminal package 2025-09-08 11:03:17 +02:00
Michael Eischer
0ab38faa2e termstatus: track current status also in background
Without this, restic could temporarily print an outdated status when
moving back into the foreground.
2025-09-08 10:50:53 +02:00
Michael Eischer
48cbbf9651 ui/termstatus: extract background handling code 2025-09-08 10:50:09 +02:00
Michael Eischer
6ff7cd9050 backend/util: extract background handling code 2025-09-08 10:42:35 +02:00
Michael Eischer
6d7e37edce Merge pull request #5491 from MichaelEischer/patch-release-cherrypicks
Patch release cherrypicks
2025-09-06 22:32:40 +02:00
dependabot[bot]
4998fd68a7 build(deps): bump docker/login-action from 3.4.0 to 3.5.0
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.4.0 to 3.5.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](74a5d14239...184bdaa072)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: 3.5.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-06 22:07:22 +02:00
greatroar
06cc6017b8 internal/restic: Fix panic in ParseDuration
Fixes #5485. Includes test case by @MichaelEischer.
2025-09-06 22:03:12 +02:00
gregoster
37851827c5 EOPNOTSUPP can be returned if the filesystem does not support xattrs (#5344)
---------

Co-authored-by: Greg Oster <oster@netbsd.org>
2025-09-06 22:03:12 +02:00
Michael Eischer
b75f80ae5f backup: fix test on windows 2025-09-06 22:02:50 +02:00
Michael Eischer
31f87b6188 add changelog 2025-09-06 22:02:50 +02:00
Michael Eischer
b67b88a0c0 backup: test that parent directory errors can be correctly filtered 2025-09-06 22:02:50 +02:00
Michael Eischer
d57b01d6eb backup: test that missing parent directory is correctly handled 2025-09-06 22:02:50 +02:00
Michael Eischer
fc81df3f54 backup: do not fail backup if some parent folder is inaccessible
Handle errors for parent directories of backup directories in the same
way as all other file access errors during a backup.
2025-09-06 22:02:50 +02:00
Michael Eischer
73995b818a backup: do not crash if nodeFromFileInfo fails
This could crash in two cases:
- if a directory is deleted between restic stat-ing it and trying to list
  its directory content.
- if restic tries to list the parent directory of a backup target, but
  the parent directory has been deleted.

Return an error in these cases instead.
2025-09-06 22:02:50 +02:00
Michael Eischer
49abea6952 add changelog 2025-09-06 21:59:54 +02:00
Dominik Schulz
f18b8ad425 Mark HTTP Error 507 as permanent
This change classifies HTTP error 507 (Insufficient Storage) as a
permanent error that should not be retried. I keep running into
this once in a while and there is literally no point in retrying when
the server is full.

Fixes #5429

Signed-off-by: Dominik Schulz <dominik.schulz@gauner.org>
2025-09-06 21:59:54 +02:00
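A hedged sketch of how such a classification can look in retry logic; the helper name is made up for illustration and this is not the backend's actual code.

```go
package main

import (
	"fmt"
	"net/http"
)

// isPermanent reports whether an HTTP status code should stop retries.
// 507 Insufficient Storage means the server is out of space, so retrying
// the upload cannot succeed.
func isPermanent(status int) bool {
	return status == http.StatusInsufficientStorage
}

func main() {
	fmt.Println(isPermanent(507)) // true: give up instead of retrying
	fmt.Println(isPermanent(503)) // false: temporary, retry later
}
```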
dependabot[bot]
0a6296bfde build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
Bumps [github.com/Azure/azure-sdk-for-go/sdk/storage/azblob](https://github.com/Azure/azure-sdk-for-go) from 1.6.1 to 1.6.2.
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/go-mgmt-sdk-release-guideline.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.6.1...sdk/storage/azblob/v1.6.2)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
  dependency-version: 1.6.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-06 21:59:54 +02:00
dependabot[bot]
2403d1f139 build(deps): bump actions/checkout from 4 to 5
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-06 21:59:54 +02:00
dependabot[bot]
86a453200a build(deps): bump google.golang.org/api from 0.228.0 to 0.248.0
Bumps [google.golang.org/api](https://github.com/googleapis/google-api-go-client) from 0.228.0 to 0.248.0.
- [Release notes](https://github.com/googleapis/google-api-go-client/releases)
- [Changelog](https://github.com/googleapis/google-api-go-client/blob/main/CHANGES.md)
- [Commits](https://github.com/googleapis/google-api-go-client/compare/v0.228.0...v0.248.0)

---
updated-dependencies:
- dependency-name: google.golang.org/api
  dependency-version: 0.248.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-06 21:59:54 +02:00
dependabot[bot]
518fbbcdc2 build(deps): bump golang.org/x/crypto from 0.39.0 to 0.41.0
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.39.0 to 0.41.0.
- [Commits](https://github.com/golang/crypto/compare/v0.39.0...v0.41.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-version: 0.41.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-06 21:59:54 +02:00
y0n3d4
c62f523e6d Update 020_installation.rst removing command options
Removed command options: their use is a user choice
2025-09-06 21:59:11 +02:00
Michele Testa
91e9f65991 Update 020_installation.rst adding instruction for Gentoo Linux 2025-09-06 21:59:11 +02:00
rhhub
d839850ed4 docs: clarify ** must be between path separators 2025-09-06 21:59:11 +02:00
A Crutcher
ac051c3dcd doc: Correct Wasabi link 2025-09-06 21:59:11 +02:00
Michael Terry
20f472a67f backend/local: ignore chmod "not supported" errors 2025-09-06 21:59:11 +02:00
dependabot[bot]
7b986795de build(deps): bump golang.org/x/time from 0.11.0 to 0.12.0
Bumps [golang.org/x/time](https://github.com/golang/time) from 0.11.0 to 0.12.0.
- [Commits](https://github.com/golang/time/compare/v0.11.0...v0.12.0)

---
updated-dependencies:
- dependency-name: golang.org/x/time
  dependency-version: 0.12.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-06 21:59:11 +02:00
dependabot[bot]
4f03e03b2c build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azidentity
Bumps [github.com/Azure/azure-sdk-for-go/sdk/azidentity](https://github.com/Azure/azure-sdk-for-go) from 1.10.0 to 1.10.1.
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/go-mgmt-sdk-release-guideline.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.10.0...sdk/azidentity/v1.10.1)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azidentity
  dependency-version: 1.10.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-06 21:59:11 +02:00
Michael Eischer
242b607bf6 walker: fix error handling if tree cannot be loaded
A tree that cannot be loaded is a fatal error when walking the tree.
Thus, return the error and exit the tree walk.
2025-09-06 21:59:11 +02:00
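A simplified sketch of the control-flow change, with stand-in functions (`loadTree`, `walk` are not the real walker API): a tree that fails to load aborts the whole walk instead of being skipped silently.

```go
package main

import "fmt"

type tree struct {
	children []string
}

var trees = map[string]tree{
	"root": {children: []string{"missing"}},
}

func loadTree(id string) (tree, error) {
	t, ok := trees[id]
	if !ok {
		return tree{}, fmt.Errorf("tree %s could not be loaded", id)
	}
	return t, nil
}

// walk propagates a load failure instead of skipping the subtree,
// so the whole tree walk exits with the error.
func walk(id string) error {
	t, err := loadTree(id)
	if err != nil {
		return err // fatal: stop the walk here
	}
	for _, child := range t.children {
		if err := walk(child); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	fmt.Println(walk("root")) // prints the load error for "missing"
}
```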
Michael Eischer
22bbbf42f5 Fix release note typos 2025-09-06 21:59:11 +02:00
dependabot[bot]
3c8fc9d9bc build(deps): bump github.com/peterbourgon/unixtransport
Bumps [github.com/peterbourgon/unixtransport](https://github.com/peterbourgon/unixtransport) from 0.0.4 to 0.0.6.
- [Release notes](https://github.com/peterbourgon/unixtransport/releases)
- [Commits](https://github.com/peterbourgon/unixtransport/compare/v0.0.4...v0.0.6)

---
updated-dependencies:
- dependency-name: github.com/peterbourgon/unixtransport
  dependency-version: 0.0.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-06 21:59:11 +02:00
dependabot[bot]
5070e62b18 build(deps): bump golang.org/x/crypto from 0.38.0 to 0.39.0
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.38.0 to 0.39.0.
- [Commits](https://github.com/golang/crypto/compare/v0.38.0...v0.39.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-version: 0.39.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-06 21:59:11 +02:00
Patrick Wolf
d64bad1a90 Update 047_tuning_backup_parameters.rst - local backend (#5355)
Users would find it helpful to know how to adjust the "local" backend, and they might not realize that the local backend is simply called `local`. This in turn leads them to think restic is slow, as they can't adjust away from 2 threads for restore and backup.
2025-09-06 21:54:41 +02:00
Michael Eischer
6bdca9a7d5 add changelog for --stdin-filename with/directory 2025-09-06 21:54:41 +02:00
Michael Eischer
91d582a667 backup: test subdirectories in stdin filenames work 2025-09-06 21:54:41 +02:00
Michael Eischer
ef1e137e7a fs/reader: return proper error on invalid filename 2025-09-06 21:54:41 +02:00
Michael Eischer
81ac49f59d fs/reader: test file not exist case 2025-09-06 21:54:41 +02:00
Michael Eischer
ba2b0b2cc7 fs/reader: use test helpers 2025-09-06 21:54:41 +02:00
Michael Eischer
37a4235e4d fs/reader: deduplicate test code 2025-09-06 21:54:41 +02:00
Michael Eischer
04898e41d1 fs/reader: fix open+stat handling 2025-09-06 21:54:41 +02:00
Michael Eischer
07e4a78e46 fs/reader: use modification time for file and directories
This ensures that a fixed input generates a fully deterministic output
file structure.
2025-09-06 21:54:41 +02:00
Michael Eischer
236f81758e fs: rewrite Reader to build fs tree up front
This adds proper support for filenames that include directories. For
example, `/foo/bar` would result in an error when trying to open `/foo`.

The directory tree is now built upfront. This lets the directory tree
construction be handled only once; all accessors then only have to look
up the constructed directory entries.
2025-09-06 21:54:41 +02:00
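A very reduced sketch of the "build the tree up front" idea for a path like `/foo/bar`, assuming hypothetical types (the real fs.Reader is considerably more involved): every intermediate directory is created first, so a later lookup of `/foo` succeeds even though only `/foo/bar` was specified.

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// node is either a directory (with children) or a file.
type node struct {
	children map[string]*node
	isFile   bool
}

// buildTree creates every intermediate directory for the given file path.
func buildTree(filename string) *node {
	root := &node{children: map[string]*node{}}
	cur := root
	parts := strings.Split(strings.Trim(path.Clean(filename), "/"), "/")
	for i, part := range parts {
		child, ok := cur.children[part]
		if !ok {
			child = &node{children: map[string]*node{}}
			cur.children[part] = child
		}
		if i == len(parts)-1 {
			child.isFile = true
		}
		cur = child
	}
	return root
}

func main() {
	root := buildTree("/foo/bar")
	foo := root.children["foo"]
	fmt.Println(foo != nil, foo.children["bar"].isFile) // true true
}
```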
Ilya Grigoriev
16850c61fa docs: when describing profiling, briefly explain .pprof files 2025-09-06 21:52:57 +02:00
Ilya Grigoriev
67a572fa0d docs: document profiling options a bit better
Previously, the docs were a bit mysterious about what "enables profiling
support" means or how one could take advantage of it.
2025-09-06 21:52:57 +02:00
Ilya Grigoriev
4686a12a2d bugfix: have --{cpu,mem,...}-profile work even if Restic exits with error code (#5373)
* bugfix: write pprof file for `--{cpu,mem,...}-profile` even on error code

Before this, if `restic backup --cpu-profile dir/ backup-dir/` couldn't
read some of the input files (e.g. they weren't readable by the user
restic was running under), the `cpu.pprof` file it outputs would be
empty.

https://github.com/spf13/cobra/issues/1893

* drop changelog as it's not relevant for end users

---------

Co-authored-by: Michael Eischer <michael.eischer@fau.de>
2025-09-06 21:52:57 +02:00
dependabot[bot]
4dbed5f905 build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
Bumps [github.com/Azure/azure-sdk-for-go/sdk/storage/azblob](https://github.com/Azure/azure-sdk-for-go) from 1.6.0 to 1.6.1.
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/release.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.6.0...sdk/azcore/v1.6.1)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
  dependency-version: 1.6.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-06 21:52:57 +02:00
dependabot[bot]
d708c5ea73 build(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azidentity
Bumps [github.com/Azure/azure-sdk-for-go/sdk/azidentity](https://github.com/Azure/azure-sdk-for-go) from 1.8.2 to 1.10.0.
- [Release notes](https://github.com/Azure/azure-sdk-for-go/releases)
- [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/go-mgmt-sdk-release-guideline.md)
- [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azidentity/v1.8.2...sdk/azcore/v1.10.0)

---
updated-dependencies:
- dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azidentity
  dependency-version: 1.10.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-06 21:52:57 +02:00
Mark Lopez
ee0cb7d1aa docs: updated installation docs for Windows 2025-09-06 21:52:57 +02:00
dependabot[bot]
590dc82719 build(deps): bump golang.org/x/sys from 0.31.0 to 0.33.0
Bumps [golang.org/x/sys](https://github.com/golang/sys) from 0.31.0 to 0.33.0.
- [Commits](https://github.com/golang/sys/compare/v0.31.0...v0.33.0)

---
updated-dependencies:
- dependency-name: golang.org/x/sys
  dependency-version: 0.33.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-06 21:52:57 +02:00
Samuel Chambers
72d70d94f9 updated doc/faq.rst_commitsSquashed 2025-09-06 21:52:57 +02:00
Markus Hansmair
aaa48e765a doc: typo & minor rewording in 'Removing files from snapshots' 2025-09-06 21:50:54 +02:00
Michael Eischer
f61cf4a1e5 docs: fix typos in developer information (#5329) 2025-09-06 21:50:54 +02:00
Michael Eischer
a22b9d5735 update direct dependencies (#5340) 2025-09-06 21:50:54 +02:00
dependabot[bot]
e9ae67c968 build(deps): bump docker/login-action from 3.3.0 to 3.4.0 (#5333)
Bumps [docker/login-action](https://github.com/docker/login-action) from 3.3.0 to 3.4.0.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](9780b0c442...74a5d14239)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-version: 3.4.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-06 21:50:54 +02:00
Mohammad Javad Naderi
1fe6fbc4b8 doc: fix typos 2025-09-06 21:50:54 +02:00
Gilbert Gilb's
3d4fb876f4 docs: fix unit for S3 restore timeout
"d" is not a valid unit.
2025-09-06 21:50:54 +02:00
Michael Eischer
5d182ed1ab forget: fix ignored RESTIC_HOST environment variable 2025-09-06 21:50:54 +02:00
greatroar
f7f6459eb9 internal/restic: Simplify ParallelRemove 2025-07-19 12:55:40 +02:00
greatroar
95a36b55f4 internal/dump: Clarify writeNode concurrency 2025-07-19 12:54:41 +02:00
greatroar
2c39b1f84f internal/repository/index: Simplify MasterIndex concurrency 2025-07-18 15:06:37 +02:00
454 changed files with 10676 additions and 7040 deletions

12
.dockerignore Normal file
View File

@@ -0,0 +1,12 @@
# Actual layer caching is impossible due to .git, but
# that must be included for provenance reasons. These ignores
# are strictly for a hygienic build.
*
!/*.go
!/go.*
!/cmd/*
!/docker/entrypoint.sh
!/internal/*
!/helpers/*
!/VERSION
!/.git/

View File

@@ -36,7 +36,7 @@ Please always follow these steps:
- Format all commit messages in the same style as [the other commits in the repository](https://github.com/restic/restic/blob/master/CONTRIBUTING.md#git-commits).
-->
- [ ] I have added tests for all code changes.
- [ ] I have added tests for all code changes, see [writing tests](https://restic.readthedocs.io/en/stable/090_participating.html#writing-tests)
- [ ] I have added documentation for relevant changes (in the manual).
- [ ] There's a new file in `changelog/unreleased/` that describes the changes for our users (see [template](https://github.com/restic/restic/blob/master/changelog/TEMPLATE)).
- [ ] I'm done! This pull request is ready for review.

View File

@@ -5,6 +5,10 @@ updates:
directory: "/" # Location of package manifests
schedule:
interval: "monthly"
groups:
golang-x-deps:
patterns:
- "golang.org/x/*"
# Dependencies listed in .github/workflows/*.yml
- package-ecosystem: "github-actions"

View File

@@ -26,10 +26,10 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Log in to the Container registry
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1
uses: docker/login-action@c94ce9fb468520275223c153574b00df6fe4bcc9
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}

View File

@@ -13,7 +13,7 @@ permissions:
contents: read
env:
latest_go: "1.24.x"
latest_go: "1.25.x"
GO111MODULE: on
jobs:
@@ -23,29 +23,29 @@ jobs:
# list of jobs to run:
include:
- job_name: Windows
go: 1.24.x
go: 1.25.x
os: windows-latest
- job_name: macOS
go: 1.24.x
go: 1.25.x
os: macOS-latest
test_fuse: false
- job_name: Linux
go: 1.24.x
go: 1.25.x
os: ubuntu-latest
test_cloud_backends: true
test_fuse: true
check_changelog: true
- job_name: Linux (race)
go: 1.24.x
go: 1.25.x
os: ubuntu-latest
test_fuse: true
test_opts: "-race"
- job_name: Linux
go: 1.23.x
go: 1.24.x
os: ubuntu-latest
test_fuse: true
@@ -57,10 +57,10 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Set up Go ${{ matrix.go }}
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version: ${{ matrix.go }}
@@ -220,10 +220,10 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Set up Go ${{ env.latest_go }}
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version: ${{ env.latest_go }}
@@ -242,18 +242,18 @@ jobs:
checks: write
steps:
- name: Check out code
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Set up Go ${{ env.latest_go }}
uses: actions/setup-go@v5
uses: actions/setup-go@v6
with:
go-version: ${{ env.latest_go }}
- name: golangci-lint
uses: golangci/golangci-lint-action@v6
uses: golangci/golangci-lint-action@v9
with:
# Required: the version of golangci-lint is required and must be specified without patch version: we always use the latest patch version.
version: v1.64.8
version: v2.4.0
args: --verbose --timeout 5m
# only run golangci-lint for pull requests, otherwise ALL hints get
@@ -287,7 +287,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Docker meta
id: meta

View File

@@ -1,70 +1,95 @@
# This is the configuration for golangci-lint for the restic project.
#
# A sample config with all settings is here:
# https://github.com/golangci/golangci-lint/blob/master/.golangci.example.yml
version: "2"
linters:
# only enable the linters listed below
disable-all: true
default: none
enable:
- asciicheck
# ensure that http response bodies are closed
- bodyclose
# restrict imports from other restic packages for internal/backend (cache exempt)
- depguard
- copyloopvar
# make sure all errors returned by functions are handled
- errcheck
# show how code can be simplified
- gosimple
# make sure code is formatted
- gofmt
# examine code and report suspicious constructs, such as Printf calls whose
# arguments do not align with the format string
- govet
# make sure names and comments are used according to the conventions
- revive
# consistent imports
- importas
# detect when assignments to existing variables are not used
- ineffassign
- nolintlint
# make sure names and comments are used according to the conventions
- revive
# run static analysis and find errors
- staticcheck
# find unused variables, functions, structs, types, etc.
- unused
# parse and typecheck code
- typecheck
# ensure that http response bodies are closed
- bodyclose
- importas
issues:
# don't use the default exclude rules, this hides (among others) ignored
# errors from Close() calls
exclude-use-default: false
# list of things to not warn about
exclude:
# revive: do not warn about missing comments for exported stuff
- exported (function|method|var|type|const) .* should have comment or be unexported
# revive: ignore constants in all caps
- don't use ALL_CAPS in Go names; use CamelCase
# revive: lots of packages don't have such a comment
- "package-comments: should have a package comment"
# staticcheck: there's no easy way to replace these packages
- "SA1019: \"golang.org/x/crypto/poly1305\" is deprecated"
- "SA1019: \"golang.org/x/crypto/openpgp\" is deprecated"
- "redefines-builtin-id:"
exclude-rules:
# revive: ignore unused parameters in tests
- path: (_test\.go|testing\.go|backend/.*/tests\.go)
text: "unused-parameter:"
linters-settings:
importas:
alias:
- pkg: github.com/restic/restic/internal/test
alias: rtest
settings:
depguard:
rules:
# Prevent backend packages from importing the internal/restic package to keep the architectural layers intact.
backend-imports:
files:
- "**/internal/backend/**"
- "!**/internal/backend/cache/**"
- "!**/internal/backend/test/**"
- "!**/*_test.go"
deny:
- pkg: "github.com/restic/restic/internal/restic"
desc: "internal/restic should not be imported to keep the architectural layers intact"
- pkg: "github.com/restic/restic/internal/repository"
desc: "internal/repository should not be imported to keep the architectural layers intact"
importas:
alias:
- pkg: github.com/restic/restic/internal/test
alias: rtest
staticcheck:
checks:
# default
- "all"
- "-ST1000"
- "-ST1003"
- "-ST1016"
- "-ST1020"
- "-ST1021"
- "-ST1022"
# extra disables
- "-QF1008" # don't warn about specifing name of embedded field on access
exclusions:
rules:
# revive: ignore unused parameters in tests
- path: (_test\.go|testing\.go|backend/.*/tests\.go)
text: "unused-parameter:"
# revive: do not warn about missing comments for exported stuff
- path: (.+)\.go$
text: exported (function|method|var|type|const) .* should have comment or be unexported
# revive: ignore constants in all caps
- path: (.+)\.go$
text: don't use ALL_CAPS in Go names; use CamelCase
# revive: lots of packages don't have such a comment
- path: (.+)\.go$
text: "package-comments: should have a package comment"
# staticcheck: there's no easy way to replace these packages
- path: (.+)\.go$
text: 'SA1019: "golang.org/x/crypto/poly1305" is deprecated'
- path: (.+)\.go$
text: 'SA1019: "golang.org/x/crypto/openpgp" is deprecated'
- path: (.+)\.go$
text: "redefines-builtin-id:"
# revive: collection of helpers to implement a backend, more descriptive names would be too repetitive
- path: internal/backend/util/.*.go$
text: "var-naming: avoid meaningless package names"
paths:
- third_party$
- builtin$
- examples$
formatters:
enable:
# make sure code is formatted
- gofmt
exclusions:
paths:
- third_party$
- builtin$
- examples$

View File

@@ -1,5 +1,6 @@
# Table of Contents
* [Changelog for 0.18.1](#changelog-for-restic-0181-2025-09-21)
* [Changelog for 0.18.0](#changelog-for-restic-0180-2025-03-27)
* [Changelog for 0.17.3](#changelog-for-restic-0173-2024-11-08)
* [Changelog for 0.17.2](#changelog-for-restic-0172-2024-10-27)
@@ -39,6 +40,106 @@
* [Changelog for 0.6.0](#changelog-for-restic-060-2017-05-29)
# Changelog for restic 0.18.1 (2025-09-21)
The following sections list the changes in restic 0.18.1 relevant to
restic users. The changes are ordered by importance.
## Summary
* Fix #5324: Correctly handle `backup --stdin-filename` with directory paths
* Fix #5325: Accept `RESTIC_HOST` environment variable in `forget` command
* Fix #5342: Ignore "chmod not supported" errors when writing files
* Fix #5344: Ignore `EOPNOTSUPP` errors for extended attributes
* Fix #5421: Fix rare crash if directory is removed during backup
* Fix #5429: Stop retrying uploads when rest-server runs out of space
* Fix #5467: Improve handling of download retries in `check` command
## Details
* Bugfix #5324: Correctly handle `backup --stdin-filename` with directory paths
In restic 0.18.0, the `backup` command failed if a filename that includes at
least a directory was passed to `--stdin-filename`. For example,
`--stdin-filename /foo/bar` resulted in the following error:
```
Fatal: unable to save snapshot: open /foo: no such file or directory
```
This has now been fixed.
https://github.com/restic/restic/issues/5324
https://github.com/restic/restic/pull/5356
* Bugfix #5325: Accept `RESTIC_HOST` environment variable in `forget` command
The `forget` command did not use the host name from the `RESTIC_HOST`
environment variable when filtering snapshots. This has now been fixed.
https://github.com/restic/restic/issues/5325
https://github.com/restic/restic/pull/5327
* Bugfix #5342: Ignore "chmod not supported" errors when writing files
Restic 0.18.0 introduced a bug that caused `chmod xxx: operation not supported`
errors to appear when writing to a local file repository that did not support
chmod (like CIFS or WebDAV mounted via FUSE). Restic now ignores those errors.
https://github.com/restic/restic/issues/5342
* Bugfix #5344: Ignore `EOPNOTSUPP` errors for extended attributes
Restic 0.18.0 added extended attribute support for NetBSD 10+, but not all
NetBSD filesystems support extended attributes. Other BSD systems can likewise
return `EOPNOTSUPP`, so restic now ignores these errors.
https://github.com/restic/restic/issues/5344
* Bugfix #5421: Fix rare crash if directory is removed during backup
In restic 0.18.0, the `backup` command could crash if a directory was removed
between reading its metadata and listing its directory content. This has now
been fixed.
https://github.com/restic/restic/pull/5421
* Bugfix #5429: Stop retrying uploads when rest-server runs out of space
When rest-server returns a `507 Insufficient Storage` error, it indicates that
no more storage capacity is available. Restic now correctly stops retrying
uploads in this case.
https://github.com/restic/restic/issues/5429
https://github.com/restic/restic/pull/5452
* Bugfix #5467: Improve handling of download retries in `check` command
In very rare cases, the `check` command could unnecessarily report repository
damage if the backend returned incomplete, corrupted data on the first download
try which is afterwards resolved by a download retry.
This could result in an error output like the following:
```
Load(<data/34567890ab>, 33918928, 0) returned error, retrying after 871.35598ms: readFull: unexpected EOF
Load(<data/34567890ab>, 33918928, 0) operation successful after 1 retries
check successful on second attempt, original error pack 34567890ab[...] contains 6 errors: [blob 12345678[...]: decrypting blob <data/12345678> from 34567890 failed: ciphertext verification failed ...]
[...]
Fatal: repository contains errors
```
This fix only applies to a very specific case where the log shows `operation
successful after 1 retries` followed by a `check successful on second attempt,
original error` that only reports `ciphertext verification failed` errors in the
pack file. If any other errors are reported in the pack file, then the
repository still has to be considered as damaged.
Now, only the check result of the last download retry is reported as intended.
https://github.com/restic/restic/issues/5467
https://github.com/restic/restic/pull/5495
# Changelog for restic 0.18.0 (2025-03-27)
The following sections list the changes in restic 0.18.0 relevant to
restic users. The changes are ordered by importance.

View File

@@ -202,6 +202,9 @@ we'll be glad to assist. Having a PR with failing integration tests is nothing
to be ashamed of. In contrast, that happens regularly for all of us. That's
what the tests are there for.
More details of how to structure tests can be found at
[writing tests](https://restic.readthedocs.io/en/stable/090_participating.html#writing-tests).
Git Commits
-----------

View File

@@ -1 +1 @@
0.18.0-dev
0.18.1-dev

View File

@@ -36,7 +36,6 @@
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//go:build ignore_build_go
// +build ignore_build_go
package main
@@ -60,7 +59,7 @@ var config = Config{
// see https://github.com/googleapis/google-cloud-go/issues/11448
DefaultBuildTags: []string{"selfupdate", "disable_grpc_modules"}, // specify build tags which are always used
Tests: []string{"./..."}, // tests to run
MinVersion: GoVersion{Major: 1, Minor: 23, Patch: 0}, // minimum Go version supported
MinVersion: GoVersion{Major: 1, Minor: 24, Patch: 0}, // minimum Go version supported
}
// Config configures the build.

View File

@@ -1,14 +1,14 @@
Bugfix: Correctly handle `backup --stdin-filename` with directories
Bugfix: Correctly handle `backup --stdin-filename` with directory paths
In restic 0.18.0, the `backup` command failed if a filename that includes
a least a directory was passed to `--stdin-filename`. For example,
at least a directory was passed to `--stdin-filename`. For example,
`--stdin-filename /foo/bar` resulted in the following error:
```
Fatal: unable to save snapshot: open /foo: no such file or directory
```
This has been fixed now.
This has now been fixed.
https://github.com/restic/restic/issues/5324
https://github.com/restic/restic/pull/5356

View File

@@ -1,7 +1,7 @@
Bugfix: Correctly handle `RESTIC_HOST` in `forget` command
Bugfix: Accept `RESTIC_HOST` environment variable in `forget` command
The `forget` command did not use the host name from the `RESTIC_HOST`
environment variable. This has been fixed.
environment variable when filtering snapshots. This has now been fixed.
https://github.com/restic/restic/issues/5325
https://github.com/restic/restic/pull/5327

View File

@@ -1,6 +1,6 @@
Bugfix: Ignore "chmod not supported" errors when writing files
Restic 0.18.0 introduced a bug that caused "chmod xxx: operation not supported"
Restic 0.18.0 introduced a bug that caused `chmod xxx: operation not supported`
errors to appear when writing to a local file repository that did not support
chmod (like CIFS or WebDAV mounted via FUSE). Restic now ignores those errors.

View File

@@ -0,0 +1,7 @@
Bugfix: Ignore `EOPNOTSUPP` errors for extended attributes
Restic 0.18.0 added extended attribute support for NetBSD 10+, but not all
NetBSD filesystems support extended attributes. Other BSD systems can
likewise return `EOPNOTSUPP`, so restic now ignores these errors.
https://github.com/restic/restic/issues/5344

View File

@@ -0,0 +1,8 @@
Bugfix: Stop retrying uploads when rest-server runs out of space
When rest-server returns a `507 Insufficient Storage` error, it indicates
that no more storage capacity is available. Restic now correctly stops
retrying uploads in this case.
https://github.com/restic/restic/issues/5429
https://github.com/restic/restic/pull/5452

View File

@@ -0,0 +1,27 @@
Bugfix: Improve handling of download retries in `check` command
In very rare cases, the `check` command could unnecessarily report repository
damage if the backend returned incomplete, corrupted data on the first download
try which is afterwards resolved by a download retry.
This could result in an error output like the following:
```
Load(<data/34567890ab>, 33918928, 0) returned error, retrying after 871.35598ms: readFull: unexpected EOF
Load(<data/34567890ab>, 33918928, 0) operation successful after 1 retries
check successful on second attempt, original error pack 34567890ab[...] contains 6 errors: [blob 12345678[...]: decrypting blob <data/12345678> from 34567890 failed: ciphertext verification failed ...]
[...]
Fatal: repository contains errors
```
This fix only applies to a very specific case where the log shows
`operation successful after 1 retries` followed by a
`check successful on second attempt, original error` that only reports
`ciphertext verification failed` errors in the pack file. If any other errors
are reported in the pack file, then the repository still has to be considered
damaged.
Now, as intended, only the check result of the last download retry is reported.
https://github.com/restic/restic/issues/5467
https://github.com/restic/restic/pull/5495

View File

@@ -1,8 +1,7 @@
Bugfix: Fix rare crash if directory is removed during backup
In restic 0.18.0, the `backup` command could crash if a directory is removed
inbetween reading its metadata and listing its directory content.
This has been fixed.
In restic 0.18.0, the `backup` command could crash if a directory was removed
between reading its metadata and listing its directory content. This has now
been fixed.
https://github.com/restic/restic/pull/5421

View File

@@ -0,0 +1,9 @@
Enhancement: `restic check` for specified snapshot(s) via snapshot filtering
Snapshots can now be specified for the `restic check` command on the command line
via the standard snapshot filter (`--tag`, `--host`, `--path`, or snapshot IDs given
directly). Checking is then limited to the pack files used by these snapshots.
https://github.com/restic/restic/issues/3326
https://github.com/restic/restic/pull/5469
https://github.com/restic/restic/pull/5644

View File

@@ -0,0 +1,9 @@
Enhancement: Support restoring ownership by name on UNIX systems
Restic restore used to restore file ownership on UNIX systems by UID and GID.
It now allows restoring file ownership by user name and group name with `--ownership-by-name`.
This makes it possible to restore snapshots on a system where the UID/GID differ from those on the system where the snapshot was created.
However, this does not include support for POSIX ACLs, which are still restored by their numeric values.
https://github.com/restic/restic/issues/3572
https://github.com/restic/restic/pull/5449
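A minimal sketch of the name-based lookup, assuming the snapshot stores user and group names; this uses only the standard library and is not restic's implementation. The path and names in `main` are hypothetical:
```go
package main

import (
	"fmt"
	"os"
	"os/user"
	"strconv"
)

// chownByName resolves the stored user/group names on the restore target
// and applies the resulting numeric IDs, which may differ from the IDs on
// the system where the snapshot was created.
func chownByName(path, userName, groupName string) error {
	u, err := user.Lookup(userName)
	if err != nil {
		return err
	}
	g, err := user.LookupGroup(groupName)
	if err != nil {
		return err
	}
	uid, err := strconv.Atoi(u.Uid)
	if err != nil {
		return err
	}
	gid, err := strconv.Atoi(g.Gid)
	if err != nil {
		return err
	}
	return os.Chown(path, uid, gid)
}

func main() {
	// Hypothetical path and names; this will likely print an error on your system.
	if err := chownByName("/tmp/example-restore", "alice", "staff"); err != nil {
		fmt.Println("chown failed:", err)
	}
}
```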

View File

@@ -0,0 +1,8 @@
Enhancement: Allow Github personal access token to be specified for `self-update`
`restic self-update` previously only used unauthenticated GitHub API requests when checking for the latest release. This caused some users sharing IP addresses to hit the GitHub rate limit, resulting in a 403 Forbidden error and preventing updates.
Restic still uses unauthenticated requests by default, but it now optionally supports authenticated GitHub API requests during `self-update`. Users can set the `$GITHUB_ACCESS_TOKEN` environment variable to a [personal access token](https://github.com/settings/tokens) for this purpose, avoiding update failures due to rate limiting.
https://github.com/restic/restic/issues/3738
https://github.com/restic/restic/pull/5568
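A hedged sketch of how such a request could be built: the token is read from the environment and, only if present, attached as a bearer token. This is illustrative, not restic's self-update code:
```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

// newReleaseRequest builds a GitHub API request; unauthenticated requests
// remain the default, and the token is used only when the variable is set.
func newReleaseRequest() (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet,
		"https://api.github.com/repos/restic/restic/releases/latest", nil)
	if err != nil {
		return nil, err
	}
	if token := os.Getenv("GITHUB_ACCESS_TOKEN"); token != "" {
		req.Header.Set("Authorization", "Bearer "+token)
	}
	return req, nil
}

func main() {
	req, err := newReleaseRequest()
	if err != nil {
		panic(err)
	}
	fmt.Println("authenticated:", req.Header.Get("Authorization") != "")
}
```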

View File

@@ -0,0 +1,12 @@
Enhancement: Support include filters in `rewrite` command
This enhancement adds the standard include filter options to the `rewrite` command:
- `--iinclude pattern`: same as `--include pattern`, but ignores the casing of filenames
- `--iinclude-file file`: same as `--include-file`, but ignores the casing of filenames in patterns
- `-i, --include pattern`: include a pattern (can be specified multiple times)
- `--include-file file`: read include patterns from a file (can be specified multiple times)
As in other commands, exclude and include filter options are mutually exclusive.
https://github.com/restic/restic/issues/4278
https://github.com/restic/restic/pull/5191

View File

@@ -0,0 +1,11 @@
Bugfix: Exit with code 3 when some `backup` source files do not exist
Restic used to exit with code 0 even when some backup sources did not exist. Restic
would exit with code 3 only when child directories or files did not exist. This
could cause confusion and unexpected behavior in scripts that relied on the exit
code to determine if the backup was successful.
Restic now exits with code 3 when some backup sources do not exist.
https://github.com/restic/restic/issues/4467
https://github.com/restic/restic/pull/5347

View File

@@ -0,0 +1,7 @@
Bugfix: Exit with correct code on SIGINT
Restic previously returned exit code 1 on SIGINT, which is incorrect.
Restic now returns 130 on SIGINT.
https://github.com/restic/restic/issues/5258
https://github.com/restic/restic/pull/5363
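Exit code 130 follows the common shell convention of 128 + the signal number (SIGINT is 2). A minimal sketch of that mapping, not restic's actual signal handling:
```go
package main

import (
	"os"
	"os/signal"
	"syscall"
)

func main() {
	ch := make(chan os.Signal, 1)
	signal.Notify(ch, syscall.SIGINT, syscall.SIGTERM)

	s := <-ch
	code := 1
	if sig, ok := s.(syscall.Signal); ok {
		code = 128 + int(sig) // SIGINT (2) -> 130, SIGTERM (15) -> 143
	}
	os.Exit(code)
}
```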

View File

@@ -0,0 +1,7 @@
Bugfix: `restic find` now checks for correct ordering of time-related options
`restic find` now immediately fails with an error if both `--oldest` and `--newest` are specified
and `--oldest` is a timestamp after `--newest`.
https://github.com/restic/restic/issues/5280
https://github.com/restic/restic/pull/5310

View File

@@ -1,7 +0,0 @@
Bugfix: Ignore EOPNOTSUPP as an error for xattr
Restic 0.18.0 added xattr support for NetBSD 10+, but not all NetBSD
filesystems support xattrs. Other BSD systems can likewise return
EOPNOTSUPP, so restic now simply ignores EOPNOTSUPP errors for xattrs.
https://github.com/restic/restic/issues/5344

View File

@@ -0,0 +1,11 @@
Enhancement: Add support for --exclude-cloud-files on macOS (e.g. iCloud drive)
Restic treated files stored in iCloud Drive as though they were regular files.
This caused restic to download all files (including files marked as cloud-only) while iterating over them.
Restic now allows excluding these files during backup with the `--exclude-cloud-files` option.
This works from macOS Sonoma (14.0) onwards; older macOS versions materialize cloud-only files as soon as `stat` is called on them.
https://github.com/restic/restic/pull/4990
https://github.com/restic/restic/issues/5352

View File

@@ -10,3 +10,5 @@ This has been fixed.
https://github.com/restic/restic/issues/5354
https://github.com/restic/restic/pull/5358
https://github.com/restic/restic/pull/5493
https://github.com/restic/restic/pull/5494

View File

@@ -0,0 +1,10 @@
Enhancement: Reduce progress bar refresh rates to reduce energy usage
Progress bars were updated at 60 fps, which could cause high CPU or GPU usage
with some terminal emulators. The refresh rate has been reduced to 10 fps to
conserve energy. In addition, this lower frequency seems to be necessary to
allow selecting text in the terminal with certain terminal emulators.
https://github.com/restic/restic/issues/5383
https://github.com/restic/restic/pull/5551
https://github.com/restic/restic/pull/5626
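An illustrative ticker-based sketch of the lower refresh rate; the surrounding progress machinery is omitted and not restic's actual code:
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Redraw the status line at most 10 times per second instead of 60.
	ticker := time.NewTicker(time.Second / 10)
	defer ticker.Stop()

	for i := 1; i <= 5; i++ {
		<-ticker.C
		fmt.Printf("\rprocessed %d items", i)
	}
	fmt.Println()
}
```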

View File

@@ -1,8 +0,0 @@
Bugfix: do not retry if rest-server runs out of space
Rest-server return error `507 Insufficient Storage` if no more storage
capacity is available at the server. Restic now no longer retries uploads
in this case.
https://github.com/restic/restic/issues/5429
https://github.com/restic/restic/pull/5452

View File

@@ -0,0 +1,12 @@
Enhancement: Allow overriding RESTIC_HOST environment variable with --host flag
When the `RESTIC_HOST` environment variable was set, there was no way to list or
operate on snapshots from all hosts, as the environment variable would always
filter to that specific host. Restic now allows overriding `RESTIC_HOST` by
explicitly providing the `--host` flag with an empty string (e.g., `--host=""` or
`--host=`), which will show snapshots from all hosts. This works for all commands
that support snapshot filtering: `snapshots`, `forget`, `find`, `stats`, `copy`,
`tag`, `repair snapshots`, `rewrite`, `mount`, `restore`, `dump`, and `ls`.
https://github.com/restic/restic/issues/5440
https://github.com/restic/restic/pull/5541
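A hedged sketch of the precedence rule using the flag library restic's CLI is built on (`spf13/pflag`): an explicitly supplied `--host` flag, even when empty, wins over `RESTIC_HOST`. The surrounding names are illustrative:
```go
package main

import (
	"fmt"
	"os"

	"github.com/spf13/pflag"
)

func main() {
	host := pflag.String("host", "", "only consider snapshots for this host")
	pflag.Parse()

	// Fall back to the environment only when --host was not given at all.
	if !pflag.CommandLine.Changed("host") {
		*host = os.Getenv("RESTIC_HOST")
	}

	if *host == "" {
		fmt.Println("no host filter: showing snapshots from all hosts")
	} else {
		fmt.Println("filtering snapshots for host:", *host)
	}
}
```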

View File

@@ -0,0 +1,10 @@
Enhancement: `copy` copies snapshots in batches
The `copy` command used to copy snapshots individually, even if this resulted in pack files
smaller than the target pack size. In particular, copying small incremental snapshots
produced many small files.
Now, `copy` processes multiple snapshots at once to avoid creating small pack files.
https://github.com/restic/restic/issues/5175
https://github.com/restic/restic/pull/5464
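An illustrative batching sketch, not the actual `copy` implementation; grouping several snapshots before flushing pack files is what avoids the small files:
```go
package main

import "fmt"

type snapshot struct{ id string }

// copyInBatches hands the copy function several snapshots at a time, so
// their blobs can be packed together into full-sized pack files.
func copyInBatches(snaps []snapshot, batchSize int, copyBatch func([]snapshot) error) error {
	for start := 0; start < len(snaps); start += batchSize {
		end := start + batchSize
		if end > len(snaps) {
			end = len(snaps)
		}
		if err := copyBatch(snaps[start:end]); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	snaps := []snapshot{{"a"}, {"b"}, {"c"}, {"d"}, {"e"}}
	_ = copyInBatches(snaps, 2, func(batch []snapshot) error {
		fmt.Println("copying batch:", batch)
		return nil
	})
}
```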

View File

@@ -0,0 +1,7 @@
Bugfix: Password prompt was sometimes not shown
The password prompt for a repository was sometimes not shown when running
the `backup -v` command. This has been fixed.
https://github.com/restic/restic/issues/5477
https://github.com/restic/restic/pull/5554

View File

@@ -0,0 +1,8 @@
Bugfix: Mark files as readonly when using the SFTP backend
Files created by the SFTP backend previously remained writable.
Restic now restricts the permissions of files in the SFTP backend to read-only.
This change only has an effect for SFTP servers that support the chmod operation.
https://github.com/restic/restic/issues/5487
https://github.com/restic/restic/pull/5497

View File

@@ -0,0 +1,15 @@
Enhancement: Reduce Azure storage costs by optimizing upload method
Restic previously used Azure's PutBlock and PutBlockList APIs for all file
uploads, which resulted in two transactions per file and doubled the storage
operation costs. For backups with many pack files, this could lead to
significant Azure storage transaction fees.
Restic now uses the more efficient PutBlob API for files up to 256 MiB,
requiring only a single transaction per file. This reduces Azure storage
operation costs by approximately 50% for typical backup workloads. Files
larger than 256 MiB continue to use the block-based upload method as required
by Azure's API limits.
https://github.com/restic/restic/issues/5531
https://github.com/restic/restic/pull/5544
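A simplified sketch of the size-based selection; `putBlob` and `putBlocks` stand in for the Azure SDK calls and are not its actual API:
```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// Azure allows single-shot uploads up to 256 MiB; larger files must use
// the two-step block upload.
const singleShotUploadLimit = 256 * 1024 * 1024

func uploadBlob(size int64, r io.Reader,
	putBlob func(io.Reader) error,
	putBlocks func(io.Reader) error) error {
	if size <= singleShotUploadLimit {
		// One transaction per file for typical pack files.
		return putBlob(r)
	}
	return putBlocks(r)
}

func main() {
	err := uploadBlob(16*1024*1024, strings.NewReader("example"),
		func(io.Reader) error { fmt.Println("PutBlob: single transaction"); return nil },
		func(io.Reader) error { fmt.Println("block-based upload"); return nil },
	)
	fmt.Println("err:", err)
}
```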

View File

@@ -0,0 +1,7 @@
Bugfix: correctly handle `snapshots --group-by` in combination with `--latest`
For the `snapshots` command, the `--latest` option did not correctly handle the
case where a non-default value was passed to `--group-by`. This has been fixed.
https://github.com/restic/restic/issues/5586
https://github.com/restic/restic/pull/5601

View File

@@ -0,0 +1,8 @@
Bugfix: Fix "chmod not supported" errors when unlocking
Restic 0.18.0 introduced a bug that caused `chmod xxx: operation not supported`
errors to appear when unlocking with a stale lock on a local file repository
that did not support chmod (like CIFS or WebDAV mounted via FUSE). On Unix,
restic now skips the unnecessary chmod call in that case.
https://github.com/restic/restic/issues/5595

View File

@@ -0,0 +1,5 @@
Change: Update dependencies and require Go 1.24 or newer
We have updated all dependencies. Restic now requires Go 1.24 or newer to build.
https://github.com/restic/restic/pull/5619

View File

@@ -0,0 +1,9 @@
Enhancement: add more status counters to `restic copy`
`restic copy` now produces more status counters in text format. The new counters
are the number of blobs to copy, their size on disk, and the number of pack files
used from the source repository. The additional statistics are only produced when
the `--verbose` option is specified.
https://github.com/restic/restic/issues/5175
https://github.com/restic/restic/pull/5319

View File

@@ -0,0 +1,11 @@
Enhancement: Enable file system privileges on Windows before access
Restic only attempted to enable Windows file system privileges when reading or
writing security descriptors, by which point access to earlier items may already
have been denied entirely. It also read extended file attributes without
using the privilege, possibly missing them and producing errors.
Restic now attempts to enable all file system privileges before any file
access, and extended attribute reads now request the backup privilege as well.
https://github.com/restic/restic/pull/5424

View File

@@ -0,0 +1,11 @@
Enhancement: Allow nice and ionice configuration for restic containers
The official restic Docker container now supports the following environment variables:
- `NICE`: set the desired nice level. See `man nice`.
- `IONICE_CLASS`: set the desired I/O scheduling class. See `man ionice`. Note that real-time scheduling requires the invoker to manually add the `SYS_NICE` capability.
- `IONICE_PRIORITY`: set the priority for ionice within the given `IONICE_CLASS`. This does nothing without `IONICE_CLASS`, and defaults to `4` (no priority, no penalties).
See https://restic.readthedocs.io/en/stable/020_installation.html#docker-container for further details.
https://github.com/restic/restic/pull/5448

View File

@@ -0,0 +1,10 @@
Bugfix: Correctly restore ACL inheritance state on Windows
Since the introduction of Security Descriptor backups in restic 0.17.0, the inheritance property of Access Control Entries (ACEs) was not restored correctly. This resulted in all restored permissions being marked as explicit (IsInherited: False), even if they were originally inherited from a parent folder.
The issue was caused by sending conflicting inheritance flags (PROTECTED_... and UNPROTECTED_...) to the Windows API during the restore process. The API would default to the more restrictive PROTECTED state, effectively disabling inheritance.
This has been fixed by ensuring that only the correct, non-conflicting inheritance flag is used when applying the security descriptor, preserving the original permission structure from the backup.
https://github.com/restic/restic/pull/5465
https://github.com/restic/restic/issues/5427

View File

@@ -0,0 +1,6 @@
Enhancement: Add OpenContainers labels to Dockerfile.release
The restic Docker image now includes labels from the OpenContainers Annotations Spec.
This information can be used by third party services.
https://github.com/restic/restic/pull/5523

View File

@@ -0,0 +1,10 @@
Enhancement: Display timezone information in snapshots output
The `snapshots` command now displays which timezone is being used to show
timestamps. Since snapshots can be created in different timezones but are
always displayed in the local timezone, a footer line is now shown indicating
the timezone used for display (e.g., "Timestamps shown in CET timezone").
This helps prevent confusion when comparing snapshots in a multi-user
environment.
https://github.com/restic/restic/pull/5588
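A minimal sketch of deriving the displayed timezone abbreviation from the local time; restic's exact output formatting may differ:
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	zone, _ := time.Now().Zone() // e.g. "CET"
	fmt.Printf("Timestamps shown in %s timezone\n", zone)
}
```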

View File

@@ -0,0 +1,7 @@
Bugfix: Return error if `RESTIC_PACK_SIZE` contains invalid value
If the environment variable `RESTIC_PACK_SIZE` could not be parsed, restic
previously ignored its value. Now, restic commands fail with an error unless
the `--pack-size` command-line option is specified.
https://github.com/restic/restic/pull/5592
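A hedged sketch of the fail-fast behaviour; restic accepts size suffixes, which is simplified here to a plain integer, and the helper name is illustrative:
```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

func packSizeFromEnv() (uint64, error) {
	val, ok := os.LookupEnv("RESTIC_PACK_SIZE")
	if !ok || val == "" {
		return 0, nil // not set: keep the default pack size
	}
	size, err := strconv.ParseUint(val, 10, 64)
	if err != nil {
		// Previously the invalid value was silently ignored.
		return 0, fmt.Errorf("invalid RESTIC_PACK_SIZE %q: %w", val, err)
	}
	return size, nil
}

func main() {
	if _, err := packSizeFromEnv(); err != nil {
		fmt.Fprintln(os.Stderr, "Fatal:", err)
		os.Exit(1)
	}
}
```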

View File

@@ -0,0 +1,7 @@
Enhancement: reduce memory usage of check/copy/diff/stats commands
We have optimized the memory usage of the `check`, `copy`, `diff` and
`stats` commands. These now require less memory when processing large
snapshots.
https://github.com/restic/restic/pull/5610

View File

@@ -0,0 +1,9 @@
Enhancement: stricter early mountpoint validation in `mount`
`restic mount` accepted parameters that would lead to the FUSE mount operation
failing only after computationally intensive work had been done to prepare the mount.
The supplied `mountpoint` argument must now refer to a directory that the current
user can access and write to; otherwise, `restic mount` exits with an error before
interacting with the repository.
https://github.com/restic/restic/pull/5718
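A hedged sketch of such early checks: the mountpoint must exist, be a directory, and be writable by the current user. The probe-file approach is illustrative and not the exact check restic performs:
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func checkMountpoint(path string) error {
	fi, err := os.Stat(path)
	if err != nil {
		return fmt.Errorf("mountpoint %s is not accessible: %w", path, err)
	}
	if !fi.IsDir() {
		return fmt.Errorf("mountpoint %s is not a directory", path)
	}
	// Probe writability by creating and removing a temporary entry.
	probe := filepath.Join(path, ".restic-mount-probe")
	f, err := os.Create(probe)
	if err != nil {
		return fmt.Errorf("mountpoint %s is not writable: %w", path, err)
	}
	_ = f.Close()
	return os.Remove(probe)
}

func main() {
	if err := checkMountpoint(os.TempDir()); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("mountpoint looks usable")
}
```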

View File

@@ -2,6 +2,8 @@ package main
import (
"context"
"fmt"
"io"
"os"
"os/signal"
"syscall"
@@ -9,26 +11,27 @@ import (
"github.com/restic/restic/internal/debug"
)
func createGlobalContext() context.Context {
func createGlobalContext(stderr io.Writer) context.Context {
ctx, cancel := context.WithCancel(context.Background())
ch := make(chan os.Signal, 1)
go cleanupHandler(ch, cancel)
go cleanupHandler(ch, cancel, stderr)
signal.Notify(ch, syscall.SIGINT, syscall.SIGTERM)
return ctx
}
// cleanupHandler handles the SIGINT and SIGTERM signals.
func cleanupHandler(c <-chan os.Signal, cancel context.CancelFunc) {
func cleanupHandler(c <-chan os.Signal, cancel context.CancelFunc, stderr io.Writer) {
s := <-c
debug.Log("signal %v received, cleaning up", s)
Warnf("%ssignal %v received, cleaning up\n", clearLine(0), s)
// ignore error as there's no good way to handle it
_, _ = fmt.Fprintf(stderr, "\rsignal %v received, cleaning up \n", s)
if val, _ := os.LookupEnv("RESTIC_DEBUG_STACKTRACE_SIGINT"); val != "" {
_, _ = os.Stderr.WriteString("\n--- STACKTRACE START ---\n\n")
_, _ = os.Stderr.WriteString(debug.DumpStacktrace())
_, _ = os.Stderr.WriteString("\n--- STACKTRACE END ---\n")
_, _ = stderr.Write([]byte("\n--- STACKTRACE START ---\n\n"))
_, _ = stderr.Write([]byte(debug.DumpStacktrace()))
_, _ = stderr.Write([]byte("\n--- STACKTRACE END ---\n"))
}
cancel()

View File

@@ -19,19 +19,20 @@ import (
"golang.org/x/sync/errgroup"
"github.com/restic/restic/internal/archiver"
"github.com/restic/restic/internal/data"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/filter"
"github.com/restic/restic/internal/fs"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/textfile"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/ui/backup"
"github.com/restic/restic/internal/ui/termstatus"
)
func newBackupCommand() *cobra.Command {
func newBackupCommand(globalOptions *global.Options) *cobra.Command {
var opts BackupOptions
cmd := &cobra.Command{
@@ -64,9 +65,7 @@ Exit status is 12 if the password is incorrect.
GroupID: cmdGroupDefault,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
term, cancel := setupTermstatus()
defer cancel()
return runBackup(cmd.Context(), opts, globalOptions, term, args)
return runBackup(cmd.Context(), opts, *globalOptions, globalOptions.Term, args)
},
}
@@ -79,7 +78,7 @@ type BackupOptions struct {
filter.ExcludePatternOptions
Parent string
GroupBy restic.SnapshotGroupByOptions
GroupBy data.SnapshotGroupByOptions
Force bool
ExcludeOtherFS bool
ExcludeIfPresent []string
@@ -89,7 +88,7 @@ type BackupOptions struct {
Stdin bool
StdinFilename string
StdinCommand bool
Tags restic.TagLists
Tags data.TagLists
Host string
FilesFrom []string
FilesFromVerbatim []string
@@ -107,7 +106,7 @@ type BackupOptions struct {
func (opts *BackupOptions) AddFlags(f *pflag.FlagSet) {
f.StringVar(&opts.Parent, "parent", "", "use this parent `snapshot` (default: latest snapshot in the group determined by --group-by and not newer than the timestamp determined by --time)")
opts.GroupBy = restic.SnapshotGroupByOptions{Host: true, Path: true}
opts.GroupBy = data.SnapshotGroupByOptions{Host: true, Path: true}
f.VarP(&opts.GroupBy, "group-by", "g", "`group` snapshots by host, paths and/or tags, separated by comma (disable grouping with '')")
f.BoolVarP(&opts.Force, "force", "f", false, `force re-reading the source files/directories (overrides the "parent" flag)`)
@@ -140,7 +139,9 @@ func (opts *BackupOptions) AddFlags(f *pflag.FlagSet) {
f.BoolVar(&opts.NoScan, "no-scan", false, "do not run scanner to estimate size of backup")
if runtime.GOOS == "windows" {
f.BoolVar(&opts.UseFsSnapshot, "use-fs-snapshot", false, "use filesystem snapshot where possible (currently only Windows VSS)")
f.BoolVar(&opts.ExcludeCloudFiles, "exclude-cloud-files", false, "excludes online-only cloud files (such as OneDrive Files On-Demand)")
}
if runtime.GOOS == "windows" || runtime.GOOS == "darwin" {
f.BoolVar(&opts.ExcludeCloudFiles, "exclude-cloud-files", false, "excludes online-only cloud files (such as OneDrive, iCloud drive, …)")
}
f.BoolVar(&opts.SkipIfUnchanged, "skip-if-unchanged", false, "skip snapshot creation if identical to parent snapshot")
@@ -159,13 +160,16 @@ var backupFSTestHook func(fs fs.FS) fs.FS
// ErrInvalidSourceData is used to report an incomplete backup
var ErrInvalidSourceData = errors.New("at least one source file could not be read")
// ErrNoSourceData is used to report that no source data was found
var ErrNoSourceData = errors.Fatal("all source directories/files do not exist")
// filterExisting returns a slice of all existing items, or an error if no
// items exist at all.
func filterExisting(items []string) (result []string, err error) {
func filterExisting(items []string, warnf func(msg string, args ...interface{})) (result []string, err error) {
for _, item := range items {
_, err := fs.Lstat(item)
if errors.Is(err, os.ErrNotExist) {
Warnf("%v does not exist, skipping\n", item)
warnf("%v does not exist, skipping\n", item)
continue
}
@@ -173,10 +177,12 @@ func filterExisting(items []string) (result []string, err error) {
}
if len(result) == 0 {
return nil, errors.Fatal("all source directories/files do not exist")
return nil, ErrNoSourceData
} else if len(result) < len(items) {
return result, ErrInvalidSourceData
}
return
return result, nil
}
// readLines reads all lines from the named file and returns them as a
@@ -185,7 +191,7 @@ func filterExisting(items []string) (result []string, err error) {
// If filename is empty, readPatternsFromFile returns an empty slice.
// If filename is a dash (-), readPatternsFromFile will read the lines from the
// standard input.
func readLines(filename string) ([]string, error) {
func readLines(filename string, stdin io.ReadCloser) ([]string, error) {
if filename == "" {
return nil, nil
}
@@ -196,7 +202,7 @@ func readLines(filename string) ([]string, error) {
)
if filename == "-" {
data, err = io.ReadAll(os.Stdin)
data, err = io.ReadAll(stdin)
} else {
data, err = textfile.Read(filename)
}
@@ -221,8 +227,8 @@ func readLines(filename string) ([]string, error) {
// readFilenamesFromFileRaw reads a list of filenames from the given file,
// or stdin if filename is "-". Each filename is terminated by a zero byte,
// which is stripped off.
func readFilenamesFromFileRaw(filename string) (names []string, err error) {
f := os.Stdin
func readFilenamesFromFileRaw(filename string, stdin io.ReadCloser) (names []string, err error) {
f := stdin
if filename != "-" {
if f, err = os.Open(filename); err != nil {
return nil, err
@@ -271,8 +277,8 @@ func readFilenamesRaw(r io.Reader) (names []string, err error) {
}
// Check returns an error when an invalid combination of options was set.
func (opts BackupOptions) Check(gopts GlobalOptions, args []string) error {
if gopts.password == "" && !gopts.InsecureNoPassword {
func (opts BackupOptions) Check(gopts global.Options, args []string) error {
if gopts.Password == "" && !gopts.InsecureNoPassword {
if opts.Stdin {
return errors.Fatal("cannot read both password and data from stdin")
}
@@ -306,7 +312,7 @@ func (opts BackupOptions) Check(gopts GlobalOptions, args []string) error {
// collectRejectByNameFuncs returns a list of all functions which may reject data
// from being saved in a snapshot based on path only
func collectRejectByNameFuncs(opts BackupOptions, repo *repository.Repository) (fs []archiver.RejectByNameFunc, err error) {
func collectRejectByNameFuncs(opts BackupOptions, repo *repository.Repository, warnf func(msg string, args ...interface{})) (fs []archiver.RejectByNameFunc, err error) {
// exclude restic cache
if repo.Cache() != nil {
f, err := rejectResticCache(repo)
@@ -317,7 +323,7 @@ func collectRejectByNameFuncs(opts BackupOptions, repo *repository.Repository) (
fs = append(fs, f)
}
fsPatterns, err := opts.ExcludePatternOptions.CollectPatterns(Warnf)
fsPatterns, err := opts.ExcludePatternOptions.CollectPatterns(warnf)
if err != nil {
return nil, err
}
@@ -330,7 +336,7 @@ func collectRejectByNameFuncs(opts BackupOptions, repo *repository.Repository) (
// collectRejectFuncs returns a list of all functions which may reject data
// from being saved in a snapshot based on path and file info
func collectRejectFuncs(opts BackupOptions, targets []string, fs fs.FS) (funcs []archiver.RejectFunc, err error) {
func collectRejectFuncs(opts BackupOptions, targets []string, fs fs.FS, warnf func(msg string, args ...interface{})) (funcs []archiver.RejectFunc, err error) {
// allowed devices
if opts.ExcludeOtherFS && !opts.Stdin && !opts.StdinCommand {
f, err := archiver.RejectByDevice(targets, fs)
@@ -354,10 +360,7 @@ func collectRejectFuncs(opts BackupOptions, targets []string, fs fs.FS) (funcs [
}
if opts.ExcludeCloudFiles && !opts.Stdin && !opts.StdinCommand {
if runtime.GOOS != "windows" {
return nil, errors.Fatalf("exclude-cloud-files is only supported on Windows")
}
f, err := archiver.RejectCloudFiles(Warnf)
f, err := archiver.RejectCloudFiles(warnf)
if err != nil {
return nil, err
}
@@ -369,7 +372,7 @@ func collectRejectFuncs(opts BackupOptions, targets []string, fs fs.FS) (funcs [
}
for _, spec := range opts.ExcludeIfPresent {
f, err := archiver.RejectIfPresent(spec, Warnf)
f, err := archiver.RejectIfPresent(spec, warnf)
if err != nil {
return nil, err
}
@@ -381,13 +384,13 @@ func collectRejectFuncs(opts BackupOptions, targets []string, fs fs.FS) (funcs [
}
// collectTargets returns a list of target files/dirs from several sources.
func collectTargets(opts BackupOptions, args []string) (targets []string, err error) {
func collectTargets(opts BackupOptions, args []string, warnf func(msg string, args ...interface{}), stdin io.ReadCloser) (targets []string, err error) {
if opts.Stdin || opts.StdinCommand {
return nil, nil
}
for _, file := range opts.FilesFrom {
fromfile, err := readLines(file)
fromfile, err := readLines(file, stdin)
if err != nil {
return nil, err
}
@@ -405,14 +408,14 @@ func collectTargets(opts BackupOptions, args []string) (targets []string, err er
return nil, fmt.Errorf("pattern: %s: %w", line, err)
}
if len(expanded) == 0 {
Warnf("pattern %q does not match any files, skipping\n", line)
warnf("pattern %q does not match any files, skipping\n", line)
}
targets = append(targets, expanded...)
}
}
for _, file := range opts.FilesFromVerbatim {
fromfile, err := readLines(file)
fromfile, err := readLines(file, stdin)
if err != nil {
return nil, err
}
@@ -425,7 +428,7 @@ func collectTargets(opts BackupOptions, args []string) (targets []string, err er
}
for _, file := range opts.FilesFromRaw {
fromfile, err := readFilenamesFromFileRaw(file)
fromfile, err := readFilenamesFromFileRaw(file, stdin)
if err != nil {
return nil, err
}
@@ -439,17 +442,12 @@ func collectTargets(opts BackupOptions, args []string) (targets []string, err er
return nil, errors.Fatal("nothing to backup, please specify source files/dirs")
}
targets, err = filterExisting(targets)
if err != nil {
return nil, err
}
return targets, nil
return filterExisting(targets, warnf)
}
// parent returns the ID of the parent snapshot. If there is none, nil is
// returned.
func findParentSnapshot(ctx context.Context, repo restic.ListerLoaderUnpacked, opts BackupOptions, targets []string, timeStampLimit time.Time) (*restic.Snapshot, error) {
func findParentSnapshot(ctx context.Context, repo restic.ListerLoaderUnpacked, opts BackupOptions, targets []string, timeStampLimit time.Time) (*data.Snapshot, error) {
if opts.Force {
return nil, nil
}
@@ -458,7 +456,7 @@ func findParentSnapshot(ctx context.Context, repo restic.ListerLoaderUnpacked, o
if snName == "" {
snName = "latest"
}
f := restic.SnapshotFilter{TimestampLimit: timeStampLimit}
f := data.SnapshotFilter{TimestampLimit: timeStampLimit}
if opts.GroupBy.Host {
f.Hosts = []string{opts.Host}
}
@@ -466,23 +464,29 @@ func findParentSnapshot(ctx context.Context, repo restic.ListerLoaderUnpacked, o
f.Paths = targets
}
if opts.GroupBy.Tag {
f.Tags = []restic.TagList{opts.Tags.Flatten()}
f.Tags = []data.TagList{opts.Tags.Flatten()}
}
sn, _, err := f.FindLatest(ctx, repo, repo, snName)
// Snapshot not found is ok if no explicit parent was set
if opts.Parent == "" && errors.Is(err, restic.ErrNoSnapshotFound) {
if opts.Parent == "" && errors.Is(err, data.ErrNoSnapshotFound) {
err = nil
}
return sn, err
}
func runBackup(ctx context.Context, opts BackupOptions, gopts GlobalOptions, term *termstatus.Terminal, args []string) error {
func runBackup(ctx context.Context, opts BackupOptions, gopts global.Options, term ui.Terminal, args []string) error {
var vsscfg fs.VSSConfig
var err error
var printer backup.ProgressPrinter
if gopts.JSON {
printer = backup.NewJSONProgress(term, gopts.Verbosity)
} else {
printer = backup.NewTextProgress(term, gopts.Verbosity)
}
if runtime.GOOS == "windows" {
if vsscfg, err = fs.ParseVSSConfig(gopts.extended); err != nil {
if vsscfg, err = fs.ParseVSSConfig(gopts.Extended); err != nil {
return err
}
}
@@ -492,47 +496,46 @@ func runBackup(ctx context.Context, opts BackupOptions, gopts GlobalOptions, ter
return err
}
targets, err := collectTargets(opts, args)
success := true
targets, err := collectTargets(opts, args, printer.E, term.InputRaw())
if err != nil {
return err
if errors.Is(err, ErrInvalidSourceData) {
success = false
} else {
return err
}
}
timeStamp := time.Now()
backupStart := timeStamp
if opts.TimeStamp != "" {
timeStamp, err = time.ParseInLocation(TimeFormat, opts.TimeStamp, time.Local)
timeStamp, err = time.ParseInLocation(global.TimeFormat, opts.TimeStamp, time.Local)
if err != nil {
return errors.Fatalf("error in time option: %v\n", err)
return errors.Fatalf("error in time option: %v", err)
}
}
if gopts.verbosity >= 2 && !gopts.JSON {
Verbosef("open repository\n")
if gopts.Verbosity >= 2 && !gopts.JSON {
printer.P("open repository")
}
ctx, repo, unlock, err := openWithAppendLock(ctx, gopts, opts.DryRun)
ctx, repo, unlock, err := openWithAppendLock(ctx, gopts, opts.DryRun, printer)
if err != nil {
return err
}
defer unlock()
var progressPrinter backup.ProgressPrinter
if gopts.JSON {
progressPrinter = backup.NewJSONProgress(term, gopts.verbosity)
} else {
progressPrinter = backup.NewTextProgress(term, gopts.verbosity)
}
progressReporter := backup.NewProgress(progressPrinter,
calculateProgressInterval(!gopts.Quiet, gopts.JSON))
progressReporter := backup.NewProgress(printer,
ui.CalculateProgressInterval(!gopts.Quiet, gopts.JSON, term.CanUpdateStatus()))
defer progressReporter.Done()
// rejectByNameFuncs collect functions that can reject items from the backup based on path only
rejectByNameFuncs, err := collectRejectByNameFuncs(opts, repo)
rejectByNameFuncs, err := collectRejectByNameFuncs(opts, repo, printer.E)
if err != nil {
return err
}
var parentSnapshot *restic.Snapshot
var parentSnapshot *data.Snapshot
if !opts.Stdin {
parentSnapshot, err = findParentSnapshot(ctx, repo, opts, targets, timeStamp)
if err != nil {
@@ -541,19 +544,18 @@ func runBackup(ctx context.Context, opts BackupOptions, gopts GlobalOptions, ter
if !gopts.JSON {
if parentSnapshot != nil {
progressPrinter.P("using parent snapshot %v\n", parentSnapshot.ID().Str())
printer.P("using parent snapshot %v\n", parentSnapshot.ID().Str())
} else {
progressPrinter.P("no parent snapshot found, will read all files\n")
printer.P("no parent snapshot found, will read all files\n")
}
}
}
if !gopts.JSON {
progressPrinter.V("load index files")
printer.V("load index files")
}
bar := newIndexTerminalProgress(gopts.Quiet, gopts.JSON, term)
err = repo.LoadIndex(ctx, bar)
err = repo.LoadIndex(ctx, printer)
if err != nil {
return err
}
@@ -570,7 +572,7 @@ func runBackup(ctx context.Context, opts BackupOptions, gopts GlobalOptions, ter
messageHandler := func(msg string, args ...interface{}) {
if !gopts.JSON {
progressPrinter.P(msg, args...)
printer.P(msg, args...)
}
}
@@ -581,12 +583,12 @@ func runBackup(ctx context.Context, opts BackupOptions, gopts GlobalOptions, ter
if opts.Stdin || opts.StdinCommand {
if !gopts.JSON {
progressPrinter.V("read data from stdin")
printer.V("read data from stdin")
}
filename := path.Join("/", opts.StdinFilename)
var source io.ReadCloser = os.Stdin
source := term.InputRaw()
if opts.StdinCommand {
source, err = fs.NewCommandReader(ctx, args, globalOptions.stderr)
source, err = fs.NewCommandReader(ctx, args, printer.E)
if err != nil {
return err
}
@@ -606,7 +608,7 @@ func runBackup(ctx context.Context, opts BackupOptions, gopts GlobalOptions, ter
}
// rejectFuncs collect functions that can reject items from the backup based on path and file info
rejectFuncs, err := collectRejectFuncs(opts, targets, targetFS)
rejectFuncs, err := collectRejectFuncs(opts, targets, targetFS, printer.E)
if err != nil {
return err
}
@@ -622,11 +624,11 @@ func runBackup(ctx context.Context, opts BackupOptions, gopts GlobalOptions, ter
sc := archiver.NewScanner(targetFS)
sc.SelectByName = selectByNameFilter
sc.Select = selectFilter
sc.Error = progressPrinter.ScannerError
sc.Error = printer.ScannerError
sc.Result = progressReporter.ReportTotal
if !gopts.JSON {
progressPrinter.V("start scan on %v", targets)
printer.V("start scan on %v", targets)
}
wg.Go(func() error { return sc.Scan(cancelCtx, targets) })
}
@@ -635,7 +637,7 @@ func runBackup(ctx context.Context, opts BackupOptions, gopts GlobalOptions, ter
arch.SelectByName = selectByNameFilter
arch.Select = selectFilter
arch.WithAtime = opts.WithAtime
success := true
arch.Error = func(item string, err error) error {
success = false
reterr := progressReporter.Error(item, err)
@@ -666,12 +668,12 @@ func runBackup(ctx context.Context, opts BackupOptions, gopts GlobalOptions, ter
Time: timeStamp,
Hostname: opts.Host,
ParentSnapshot: parentSnapshot,
ProgramVersion: "restic " + version,
ProgramVersion: "restic " + global.Version,
SkipIfUnchanged: opts.SkipIfUnchanged,
}
if !gopts.JSON {
progressPrinter.V("start backup on %v", targets)
printer.V("start backup on %v", targets)
}
_, id, summary, err := arch.Snapshot(ctx, targets, snapshotOpts)

View File

@@ -3,33 +3,34 @@ package main
import (
"context"
"fmt"
"io"
"os"
"path/filepath"
"runtime"
"testing"
"time"
"github.com/restic/restic/internal/data"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/fs"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/restic"
rtest "github.com/restic/restic/internal/test"
"github.com/restic/restic/internal/ui/termstatus"
)
func testRunBackupAssumeFailure(t testing.TB, dir string, target []string, opts BackupOptions, gopts GlobalOptions) error {
return withTermStatus(gopts, func(ctx context.Context, term *termstatus.Terminal) error {
func testRunBackupAssumeFailure(t testing.TB, dir string, target []string, opts BackupOptions, gopts global.Options) error {
return withTermStatus(t, gopts, func(ctx context.Context, gopts global.Options) error {
t.Logf("backing up %v in %v", target, dir)
if dir != "" {
cleanup := rtest.Chdir(t, dir)
defer cleanup()
}
opts.GroupBy = restic.SnapshotGroupByOptions{Host: true, Path: true}
return runBackup(ctx, opts, gopts, term, target)
opts.GroupBy = data.SnapshotGroupByOptions{Host: true, Path: true}
return runBackup(ctx, opts, gopts, gopts.Term, target)
})
}
func testRunBackup(t testing.TB, dir string, target []string, opts BackupOptions, gopts GlobalOptions) {
func testRunBackup(t testing.TB, dir string, target []string, opts BackupOptions, gopts global.Options) {
err := testRunBackupAssumeFailure(t, dir, target, opts, gopts)
rtest.Assert(t, err == nil, "Error while backing up: %v", err)
}
@@ -56,13 +57,13 @@ func testBackup(t *testing.T, useFsSnapshot bool) {
testListSnapshots(t, env.gopts, 1)
testRunCheck(t, env.gopts)
stat1 := dirStats(env.repo)
stat1 := dirStats(t, env.repo)
// second backup, implicit incremental
testRunBackup(t, "", []string{env.testdata}, opts, env.gopts)
snapshotIDs := testListSnapshots(t, env.gopts, 2)
stat2 := dirStats(env.repo)
stat2 := dirStats(t, env.repo)
if stat2.size > stat1.size+stat1.size/10 {
t.Error("repository size has grown by more than 10 percent")
}
@@ -74,7 +75,7 @@ func testBackup(t *testing.T, useFsSnapshot bool) {
testRunBackup(t, "", []string{env.testdata}, opts, env.gopts)
snapshotIDs = testListSnapshots(t, env.gopts, 3)
stat3 := dirStats(env.repo)
stat3 := dirStats(t, env.repo)
if stat3.size > stat1.size+stat1.size/10 {
t.Error("repository size has grown by more than 10 percent")
}
@@ -85,7 +86,7 @@ func testBackup(t *testing.T, useFsSnapshot bool) {
restoredir := filepath.Join(env.base, fmt.Sprintf("restore%d", i))
t.Logf("restoring snapshot %v to %v", snapshotID.Str(), restoredir)
testRunRestore(t, env.gopts, restoredir, snapshotID.String()+":"+toPathInSnapshot(filepath.Dir(env.testdata)))
diff := directoriesContentsDiff(env.testdata, filepath.Join(restoredir, "testdata"))
diff := directoriesContentsDiff(t, env.testdata, filepath.Join(restoredir, "testdata"))
rtest.Assert(t, diff == "", "directories are not equal: %v", diff)
}
@@ -218,41 +219,41 @@ func TestDryRunBackup(t *testing.T) {
// dry run before first backup
testRunBackup(t, filepath.Dir(env.testdata), []string{"testdata"}, dryOpts, env.gopts)
snapshotIDs := testListSnapshots(t, env.gopts, 0)
packIDs := testRunList(t, "packs", env.gopts)
packIDs := testRunList(t, env.gopts, "packs")
rtest.Assert(t, len(packIDs) == 0,
"expected no data, got %v", snapshotIDs)
indexIDs := testRunList(t, "index", env.gopts)
indexIDs := testRunList(t, env.gopts, "index")
rtest.Assert(t, len(indexIDs) == 0,
"expected no index, got %v", snapshotIDs)
// first backup
testRunBackup(t, filepath.Dir(env.testdata), []string{"testdata"}, opts, env.gopts)
snapshotIDs = testListSnapshots(t, env.gopts, 1)
packIDs = testRunList(t, "packs", env.gopts)
indexIDs = testRunList(t, "index", env.gopts)
packIDs = testRunList(t, env.gopts, "packs")
indexIDs = testRunList(t, env.gopts, "index")
// dry run between backups
testRunBackup(t, filepath.Dir(env.testdata), []string{"testdata"}, dryOpts, env.gopts)
snapshotIDsAfter := testListSnapshots(t, env.gopts, 1)
rtest.Equals(t, snapshotIDs, snapshotIDsAfter)
dataIDsAfter := testRunList(t, "packs", env.gopts)
dataIDsAfter := testRunList(t, env.gopts, "packs")
rtest.Equals(t, packIDs, dataIDsAfter)
indexIDsAfter := testRunList(t, "index", env.gopts)
indexIDsAfter := testRunList(t, env.gopts, "index")
rtest.Equals(t, indexIDs, indexIDsAfter)
// second backup, implicit incremental
testRunBackup(t, filepath.Dir(env.testdata), []string{"testdata"}, opts, env.gopts)
snapshotIDs = testListSnapshots(t, env.gopts, 2)
packIDs = testRunList(t, "packs", env.gopts)
indexIDs = testRunList(t, "index", env.gopts)
packIDs = testRunList(t, env.gopts, "packs")
indexIDs = testRunList(t, env.gopts, "index")
// another dry run
testRunBackup(t, filepath.Dir(env.testdata), []string{"testdata"}, dryOpts, env.gopts)
snapshotIDsAfter = testListSnapshots(t, env.gopts, 2)
rtest.Equals(t, snapshotIDs, snapshotIDsAfter)
dataIDsAfter = testRunList(t, "packs", env.gopts)
dataIDsAfter = testRunList(t, env.gopts, "packs")
rtest.Equals(t, packIDs, dataIDsAfter)
indexIDsAfter = testRunList(t, "index", env.gopts)
indexIDsAfter = testRunList(t, env.gopts, "index")
rtest.Equals(t, indexIDs, indexIDsAfter)
}
@@ -262,22 +263,27 @@ func TestBackupNonExistingFile(t *testing.T) {
testSetupBackupData(t, env)
_ = withRestoreGlobalOptions(func() error {
globalOptions.stderr = io.Discard
p := filepath.Join(env.testdata, "0", "0", "9")
dirs := []string{
filepath.Join(p, "0"),
filepath.Join(p, "1"),
filepath.Join(p, "nonexisting"),
filepath.Join(p, "5"),
}
p := filepath.Join(env.testdata, "0", "0", "9")
dirs := []string{
filepath.Join(p, "0"),
filepath.Join(p, "1"),
filepath.Join(p, "nonexisting"),
filepath.Join(p, "5"),
}
opts := BackupOptions{}
opts := BackupOptions{}
testRunBackup(t, "", dirs, opts, env.gopts)
return nil
})
// mix of existing and non-existing files
err := testRunBackupAssumeFailure(t, "", dirs, opts, env.gopts)
rtest.Assert(t, err != nil, "expected error for non-existing file")
rtest.Assert(t, errors.Is(err, ErrInvalidSourceData), "expected ErrInvalidSourceData; got %v", err)
// only non-existing file
dirs = []string{
filepath.Join(p, "nonexisting"),
}
err = testRunBackupAssumeFailure(t, "", dirs, opts, env.gopts)
rtest.Assert(t, err != nil, "expected error for non-existing file")
rtest.Assert(t, errors.Is(err, ErrNoSourceData), "expected ErrNoSourceData; got %v", err)
}
func TestBackupSelfHealing(t *testing.T) {
@@ -438,13 +444,13 @@ func TestIncrementalBackup(t *testing.T) {
testRunBackup(t, "", []string{datadir}, opts, env.gopts)
testRunCheck(t, env.gopts)
stat1 := dirStats(env.repo)
stat1 := dirStats(t, env.repo)
rtest.OK(t, appendRandomData(testfile, incrementalSecondWrite))
testRunBackup(t, "", []string{datadir}, opts, env.gopts)
testRunCheck(t, env.gopts)
stat2 := dirStats(env.repo)
stat2 := dirStats(t, env.repo)
if stat2.size-stat1.size > incrementalFirstWrite {
t.Errorf("repository size has grown by more than %d bytes", incrementalFirstWrite)
}
@@ -454,14 +460,13 @@ func TestIncrementalBackup(t *testing.T) {
testRunBackup(t, "", []string{datadir}, opts, env.gopts)
testRunCheck(t, env.gopts)
stat3 := dirStats(env.repo)
stat3 := dirStats(t, env.repo)
if stat3.size-stat2.size > incrementalFirstWrite {
t.Errorf("repository size has grown by more than %d bytes", incrementalFirstWrite)
}
t.Logf("repository grown by %d bytes", stat3.size-stat2.size)
}
// nolint: staticcheck // false positive nil pointer dereference check
func TestBackupTags(t *testing.T) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
@@ -481,7 +486,7 @@ func TestBackupTags(t *testing.T) {
"expected no tags, got %v", newest.Tags)
parent := newest
opts.Tags = restic.TagLists{[]string{"NL"}}
opts.Tags = data.TagLists{[]string{"NL"}}
testRunBackup(t, "", []string{env.testdata}, opts, env.gopts)
testRunCheck(t, env.gopts)
newest, _ = testRunSnapshots(t, env.gopts)
@@ -497,7 +502,6 @@ func TestBackupTags(t *testing.T) {
"expected parent to be %v, got %v", parent.ID, newest.Parent)
}
// nolint: staticcheck // false positive nil pointer dereference check
func TestBackupProgramVersion(t *testing.T) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
@@ -509,7 +513,7 @@ func TestBackupProgramVersion(t *testing.T) {
if newest == nil {
t.Fatal("expected a backup, got nil")
}
resticVersion := "restic " + version
resticVersion := "restic " + global.Version
rtest.Assert(t, newest.ProgramVersion == resticVersion,
"expected %v, got %v", resticVersion, newest.ProgramVersion)
}
@@ -567,7 +571,7 @@ func TestHardLink(t *testing.T) {
restoredir := filepath.Join(env.base, fmt.Sprintf("restore%d", i))
t.Logf("restoring snapshot %v to %v", snapshotID.Str(), restoredir)
testRunRestore(t, env.gopts, restoredir, snapshotID.String())
diff := directoriesContentsDiff(env.testdata, filepath.Join(restoredir, "testdata"))
diff := directoriesContentsDiff(t, env.testdata, filepath.Join(restoredir, "testdata"))
rtest.Assert(t, diff == "", "directories are not equal %v", diff)
linkResults := createFileSetPerHardlink(filepath.Join(restoredir, "testdata"))
@@ -703,7 +707,7 @@ func TestBackupEmptyPassword(t *testing.T) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
env.gopts.password = ""
env.gopts.Password = ""
env.gopts.InsecureNoPassword = true
testSetupBackupData(t, env)

View File

@@ -67,10 +67,13 @@ func TestCollectTargets(t *testing.T) {
FilesFromRaw: []string{f3.Name()},
}
targets, err := collectTargets(opts, []string{filepath.Join(dir, "cmdline arg")})
targets, err := collectTargets(opts, []string{filepath.Join(dir, "cmdline arg")}, t.Logf, nil)
rtest.OK(t, err)
sort.Strings(targets)
rtest.Equals(t, expect, targets)
_, err = collectTargets(opts, []string{filepath.Join(dir, "cmdline arg"), filepath.Join(dir, "non-existing-file")}, t.Logf, nil)
rtest.Assert(t, err == ErrInvalidSourceData, "expected error when not all targets exist")
}
func TestReadFilenamesRaw(t *testing.T) {

View File

@@ -10,13 +10,14 @@ import (
"github.com/restic/restic/internal/backend/cache"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/ui/table"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
func newCacheCommand() *cobra.Command {
func newCacheCommand(globalOptions *global.Options) *cobra.Command {
var opts CacheOptions
cmd := &cobra.Command{
@@ -34,7 +35,7 @@ Exit status is 1 if there was any error.
GroupID: cmdGroupDefault,
DisableAutoGenTag: true,
RunE: func(_ *cobra.Command, args []string) error {
return runCache(opts, globalOptions, args)
return runCache(opts, *globalOptions, args, globalOptions.Term)
},
}
@@ -55,7 +56,9 @@ func (opts *CacheOptions) AddFlags(f *pflag.FlagSet) {
f.BoolVar(&opts.NoSize, "no-size", false, "do not output the size of the cache directories")
}
func runCache(opts CacheOptions, gopts GlobalOptions, args []string) error {
func runCache(opts CacheOptions, gopts global.Options, args []string, term ui.Terminal) error {
printer := ui.NewProgressPrinter(false, gopts.Verbosity, term)
if len(args) > 0 {
return errors.Fatal("the cache command expects no arguments, only options - please see `restic help cache` for usage and flags")
}
@@ -83,17 +86,17 @@ func runCache(opts CacheOptions, gopts GlobalOptions, args []string) error {
}
if len(oldDirs) == 0 {
Verbosef("no old cache dirs found\n")
printer.P("no old cache dirs found")
return nil
}
Verbosef("remove %d old cache directories\n", len(oldDirs))
printer.P("remove %d old cache directories", len(oldDirs))
for _, item := range oldDirs {
dir := filepath.Join(cachedir, item.Name())
err = os.RemoveAll(dir)
if err != nil {
Warnf("unable to remove %v: %v\n", dir, err)
printer.E("unable to remove %v: %v", dir, err)
}
}
@@ -123,7 +126,7 @@ func runCache(opts CacheOptions, gopts GlobalOptions, args []string) error {
}
if len(dirs) == 0 {
Printf("no cache dirs found, basedir is %v\n", cachedir)
printer.S("no cache dirs found, basedir is %v", cachedir)
return nil
}
@@ -159,8 +162,8 @@ func runCache(opts CacheOptions, gopts GlobalOptions, args []string) error {
})
}
_ = tab.Write(globalOptions.stdout)
Printf("%d cache dirs in %s\n", len(dirs), cachedir)
_ = tab.Write(gopts.Term.OutputWriter())
printer.S("%d cache dirs in %s", len(dirs), cachedir)
return nil
}

View File

@@ -7,14 +7,17 @@ import (
"github.com/spf13/cobra"
"github.com/restic/restic/internal/data"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui"
)
var catAllowedCmds = []string{"config", "index", "snapshot", "key", "masterkey", "lock", "pack", "blob", "tree"}
func newCatCommand() *cobra.Command {
func newCatCommand(globalOptions *global.Options) *cobra.Command {
cmd := &cobra.Command{
Use: "cat [flags] [masterkey|config|pack ID|blob ID|snapshot ID|index ID|key ID|lock ID|tree snapshot:subfolder]",
Short: "Print internal objects to stdout",
@@ -33,7 +36,7 @@ Exit status is 12 if the password is incorrect.
GroupID: cmdGroupDefault,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runCat(cmd.Context(), globalOptions, args)
return runCat(cmd.Context(), *globalOptions, args, globalOptions.Term)
},
ValidArgs: catAllowedCmds,
}
@@ -63,12 +66,14 @@ func validateCatArgs(args []string) error {
return nil
}
func runCat(ctx context.Context, gopts GlobalOptions, args []string) error {
func runCat(ctx context.Context, gopts global.Options, args []string, term ui.Terminal) error {
printer := ui.NewProgressPrinter(gopts.JSON, gopts.Verbosity, term)
if err := validateCatArgs(args); err != nil {
return err
}
ctx, repo, unlock, err := openWithReadLock(ctx, gopts, gopts.NoLock)
ctx, repo, unlock, err := openWithReadLock(ctx, gopts, gopts.NoLock, printer)
if err != nil {
return err
}
@@ -80,7 +85,7 @@ func runCat(ctx context.Context, gopts GlobalOptions, args []string) error {
if tpe != "masterkey" && tpe != "config" && tpe != "snapshot" && tpe != "tree" {
id, err = restic.ParseID(args[1])
if err != nil {
return errors.Fatalf("unable to parse ID: %v\n", err)
return errors.Fatalf("unable to parse ID: %v", err)
}
}
@@ -91,7 +96,7 @@ func runCat(ctx context.Context, gopts GlobalOptions, args []string) error {
return err
}
Println(string(buf))
printer.S(string(buf))
return nil
case "index":
buf, err := repo.LoadUnpacked(ctx, restic.IndexFile, id)
@@ -99,12 +104,12 @@ func runCat(ctx context.Context, gopts GlobalOptions, args []string) error {
return err
}
Println(string(buf))
printer.S(string(buf))
return nil
case "snapshot":
sn, _, err := restic.FindSnapshot(ctx, repo, repo, args[1])
sn, _, err := data.FindSnapshot(ctx, repo, repo, args[1])
if err != nil {
return errors.Fatalf("could not find snapshot: %v\n", err)
return errors.Fatalf("could not find snapshot: %v", err)
}
buf, err := json.MarshalIndent(sn, "", " ")
@@ -112,7 +117,7 @@ func runCat(ctx context.Context, gopts GlobalOptions, args []string) error {
return err
}
Println(string(buf))
printer.S(string(buf))
return nil
case "key":
key, err := repository.LoadKey(ctx, repo, id)
@@ -125,7 +130,7 @@ func runCat(ctx context.Context, gopts GlobalOptions, args []string) error {
return err
}
Println(string(buf))
printer.S(string(buf))
return nil
case "masterkey":
buf, err := json.MarshalIndent(repo.Key(), "", " ")
@@ -133,7 +138,7 @@ func runCat(ctx context.Context, gopts GlobalOptions, args []string) error {
return err
}
Println(string(buf))
printer.S(string(buf))
return nil
case "lock":
lock, err := restic.LoadLock(ctx, repo, id)
@@ -146,7 +151,7 @@ func runCat(ctx context.Context, gopts GlobalOptions, args []string) error {
return err
}
Println(string(buf))
printer.S(string(buf))
return nil
case "pack":
@@ -158,15 +163,14 @@ func runCat(ctx context.Context, gopts GlobalOptions, args []string) error {
hash := restic.Hash(buf)
if !hash.Equal(id) {
Warnf("Warning: hash of data does not match ID, want\n %v\ngot:\n %v\n", id.String(), hash.String())
printer.E("Warning: hash of data does not match ID, want\n %v\ngot:\n %v", id.String(), hash.String())
}
_, err = globalOptions.stdout.Write(buf)
_, err = term.OutputRaw().Write(buf)
return err
case "blob":
bar := newIndexProgress(gopts.Quiet, gopts.JSON)
err = repo.LoadIndex(ctx, bar)
err = repo.LoadIndex(ctx, printer)
if err != nil {
return err
}
@@ -181,25 +185,24 @@ func runCat(ctx context.Context, gopts GlobalOptions, args []string) error {
return err
}
_, err = globalOptions.stdout.Write(buf)
_, err = term.OutputRaw().Write(buf)
return err
}
return errors.Fatal("blob not found")
case "tree":
sn, subfolder, err := restic.FindSnapshot(ctx, repo, repo, args[1])
sn, subfolder, err := data.FindSnapshot(ctx, repo, repo, args[1])
if err != nil {
return errors.Fatalf("could not find snapshot: %v\n", err)
return errors.Fatalf("could not find snapshot: %v", err)
}
bar := newIndexProgress(gopts.Quiet, gopts.JSON)
err = repo.LoadIndex(ctx, bar)
err = repo.LoadIndex(ctx, printer)
if err != nil {
return err
}
sn.Tree, err = restic.FindTreeDirectory(ctx, repo, sn.Tree, subfolder)
sn.Tree, err = data.FindTreeDirectory(ctx, repo, sn.Tree, subfolder)
if err != nil {
return err
}
@@ -208,7 +211,7 @@ func runCat(ctx context.Context, gopts GlobalOptions, args []string) error {
if err != nil {
return err
}
_, err = globalOptions.stdout.Write(buf)
_, err = term.OutputRaw().Write(buf)
return err
default:

View File

@@ -15,15 +15,16 @@ import (
"github.com/restic/restic/internal/backend/cache"
"github.com/restic/restic/internal/checker"
"github.com/restic/restic/internal/data"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/ui/progress"
"github.com/restic/restic/internal/ui/termstatus"
)
func newCheckCommand() *cobra.Command {
func newCheckCommand(globalOptions *global.Options) *cobra.Command {
var opts CheckOptions
cmd := &cobra.Command{
Use: "check [flags]",
@@ -47,14 +48,13 @@ Exit status is 12 if the password is incorrect.
GroupID: cmdGroupDefault,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
term, cancel := setupTermstatus()
defer cancel()
summary, err := runCheck(cmd.Context(), opts, globalOptions, args, term)
finalizeSnapshotFilter(&opts.SnapshotFilter)
summary, err := runCheck(cmd.Context(), opts, *globalOptions, args, globalOptions.Term)
if globalOptions.JSON {
if err != nil && summary.NumErrors == 0 {
summary.NumErrors = 1
}
term.Print(ui.ToJSONString(summary))
globalOptions.Term.Print(ui.ToJSONString(summary))
}
return err
},
@@ -73,6 +73,7 @@ type CheckOptions struct {
ReadDataSubset string
CheckUnused bool
WithCache bool
data.SnapshotFilter
}
func (opts *CheckOptions) AddFlags(f *pflag.FlagSet) {
@@ -86,6 +87,7 @@ func (opts *CheckOptions) AddFlags(f *pflag.FlagSet) {
panic(err)
}
f.BoolVar(&opts.WithCache, "with-cache", false, "use existing cache, only read uncached data from repository")
initMultiSnapshotFilter(f, &opts.SnapshotFilter, true)
}
func checkFlags(opts CheckOptions) error {
@@ -173,7 +175,7 @@ func parsePercentage(s string) (float64, error) {
// - if the user explicitly requested --no-cache, we don't use any cache
// - if the user provides --cache-dir, we use a cache in a temporary sub-directory of the specified directory and the sub-directory is deleted after the check
// - by default, we use a cache in a temporary directory that is deleted after the check
func prepareCheckCache(opts CheckOptions, gopts *GlobalOptions, printer progress.Printer) (cleanup func()) {
func prepareCheckCache(opts CheckOptions, gopts *global.Options, printer progress.Printer) (cleanup func()) {
cleanup = func() {}
if opts.WithCache {
// use the default cache, no setup needed
@@ -194,7 +196,7 @@ func prepareCheckCache(opts CheckOptions, gopts *GlobalOptions, printer progress
// use a cache in a temporary directory
err := os.MkdirAll(cachedir, 0755)
if err != nil {
Warnf("unable to create cache directory %s, disabling cache: %v\n", cachedir, err)
printer.E("unable to create cache directory %s, disabling cache: %v", cachedir, err)
gopts.NoCache = true
return cleanup
}
@@ -220,15 +222,12 @@ func prepareCheckCache(opts CheckOptions, gopts *GlobalOptions, printer progress
return cleanup
}
func runCheck(ctx context.Context, opts CheckOptions, gopts GlobalOptions, args []string, term *termstatus.Terminal) (checkSummary, error) {
func runCheck(ctx context.Context, opts CheckOptions, gopts global.Options, args []string, term ui.Terminal) (checkSummary, error) {
summary := checkSummary{MessageType: "summary"}
if len(args) != 0 {
return summary, errors.Fatal("the check command expects no arguments, only options - please see `restic help check` for usage and flags")
}
var printer progress.Printer
if !gopts.JSON {
printer = newTerminalProgressPrinter(gopts.verbosity, term)
printer = ui.NewProgressPrinter(gopts.JSON, gopts.Verbosity, term)
} else {
printer = newJSONErrorPrinter(term)
}
@@ -239,21 +238,20 @@ func runCheck(ctx context.Context, opts CheckOptions, gopts GlobalOptions, args
if !gopts.NoLock {
printer.P("create exclusive lock for repository\n")
}
ctx, repo, unlock, err := openWithExclusiveLock(ctx, gopts, gopts.NoLock)
ctx, repo, unlock, err := openWithExclusiveLock(ctx, gopts, gopts.NoLock, printer)
if err != nil {
return summary, err
}
defer unlock()
chkr := checker.New(repo, opts.CheckUnused)
err = chkr.LoadSnapshots(ctx)
err = chkr.LoadSnapshots(ctx, &opts.SnapshotFilter, args)
if err != nil {
return summary, err
}
printer.P("load indexes\n")
bar := newIndexTerminalProgress(gopts.Quiet, gopts.JSON, term)
hints, errs := chkr.LoadIndex(ctx, bar)
hints, errs := chkr.LoadIndex(ctx, printer)
if ctx.Err() != nil {
return summary, ctx.Err()
}
@@ -261,10 +259,10 @@ func runCheck(ctx context.Context, opts CheckOptions, gopts GlobalOptions, args
errorsFound := false
for _, hint := range hints {
switch hint.(type) {
case *checker.ErrDuplicatePacks:
case *repository.ErrDuplicatePacks:
printer.S("%s", hint.Error())
summary.HintRepairIndex = true
case *checker.ErrMixedPack:
case *repository.ErrMixedPack:
printer.S("%s", hint.Error())
summary.HintPrune = true
default:
@@ -299,7 +297,7 @@ func runCheck(ctx context.Context, opts CheckOptions, gopts GlobalOptions, args
go chkr.Packs(ctx, errChan)
for err := range errChan {
var packErr *checker.PackError
var packErr *repository.PackError
if errors.As(err, &packErr) {
if packErr.Orphaned {
orphanedPacks++
@@ -363,6 +361,7 @@ func runCheck(ctx context.Context, opts CheckOptions, gopts GlobalOptions, args
return summary, ctx.Err()
}
// the following block only used for tests
if opts.CheckUnused {
unused, err := chkr.UnusedBlobs(ctx)
if err != nil {
@@ -374,12 +373,16 @@ func runCheck(ctx context.Context, opts CheckOptions, gopts GlobalOptions, args
}
}
doReadData := func(packs map[restic.ID]int64) {
readDataFilter, err := buildPacksFilter(opts, printer, chkr.IsFiltered())
if err != nil {
return summary, err
}
if readDataFilter != nil {
p := printer.NewCounter("packs")
p.SetMax(uint64(len(packs)))
errChan := make(chan error)
go chkr.ReadPacks(ctx, packs, p, errChan)
go chkr.ReadPacks(ctx, readDataFilter, p, errChan)
for err := range errChan {
errorsFound = true
@@ -392,48 +395,6 @@ func runCheck(ctx context.Context, opts CheckOptions, gopts GlobalOptions, args
p.Done()
}
switch {
case opts.ReadData:
printer.P("read all data\n")
doReadData(selectPacksByBucket(chkr.GetPacks(), 1, 1))
case opts.ReadDataSubset != "":
var packs map[restic.ID]int64
dataSubset, err := stringToIntSlice(opts.ReadDataSubset)
if err == nil {
bucket := dataSubset[0]
totalBuckets := dataSubset[1]
packs = selectPacksByBucket(chkr.GetPacks(), bucket, totalBuckets)
packCount := uint64(len(packs))
printer.P("read group #%d of %d data packs (out of total %d packs in %d groups)\n", bucket, packCount, chkr.CountPacks(), totalBuckets)
} else if strings.HasSuffix(opts.ReadDataSubset, "%") {
percentage, err := parsePercentage(opts.ReadDataSubset)
if err == nil {
packs = selectRandomPacksByPercentage(chkr.GetPacks(), percentage)
printer.P("read %.1f%% of data packs\n", percentage)
}
} else {
repoSize := int64(0)
allPacks := chkr.GetPacks()
for _, size := range allPacks {
repoSize += size
}
if repoSize == 0 {
return summary, errors.Fatal("Cannot read from a repository having size 0")
}
subsetSize, _ := ui.ParseBytes(opts.ReadDataSubset)
if subsetSize > repoSize {
subsetSize = repoSize
}
packs = selectRandomPacksByFileSize(chkr.GetPacks(), subsetSize, repoSize)
percentage := float64(subsetSize) / float64(repoSize) * 100.0
printer.P("read %d bytes (%.1f%%) of data packs\n", subsetSize, percentage)
}
if packs == nil {
return summary, errors.Fatal("internal error: failed to select packs to check")
}
doReadData(packs)
}
if len(salvagePacks) > 0 {
printer.E("\nThe repository contains damaged pack files. These damaged files must be removed to repair the repository. This can be done using the following commands. Please read the troubleshooting guide at https://restic.readthedocs.io/en/stable/077_troubleshooting.html first.\n\n")
for id := range salvagePacks {
@@ -457,6 +418,64 @@ func runCheck(ctx context.Context, opts CheckOptions, gopts GlobalOptions, args
return summary, nil
}
func buildPacksFilter(opts CheckOptions, printer progress.Printer,
filteredStatus bool) (func(packs map[restic.ID]int64) map[restic.ID]int64, error) {
typeData := ""
if filteredStatus {
typeData = "filtered "
}
switch {
case opts.ReadData:
return func(packs map[restic.ID]int64) map[restic.ID]int64 {
printer.P("read all %sdata", typeData)
return packs
}, nil
case opts.ReadDataSubset != "":
dataSubset, err := stringToIntSlice(opts.ReadDataSubset)
if err == nil {
bucket := dataSubset[0]
totalBuckets := dataSubset[1]
return func(packs map[restic.ID]int64) map[restic.ID]int64 {
packCount := uint64(len(packs))
packs = selectPacksByBucket(packs, bucket, totalBuckets)
printer.P("read group #%d of %d %sdata packs (out of total %d packs in %d groups", bucket, len(packs), typeData, packCount, totalBuckets)
return packs
}, nil
} else if strings.HasSuffix(opts.ReadDataSubset, "%") {
percentage, err := parsePercentage(opts.ReadDataSubset)
if err != nil {
return nil, err
}
return func(packs map[restic.ID]int64) map[restic.ID]int64 {
printer.P("read %.1f%% of %spackfiles", percentage, typeData)
return selectRandomPacksByPercentage(packs, percentage)
}, nil
}
repoSize := int64(0)
return func(packs map[restic.ID]int64) map[restic.ID]int64 {
for _, size := range packs {
repoSize += size
}
subsetSize, _ := ui.ParseBytes(opts.ReadDataSubset)
if subsetSize > repoSize {
subsetSize = repoSize
}
if repoSize > 0 {
packs = selectRandomPacksByFileSize(packs, subsetSize, repoSize)
}
percentage := float64(subsetSize) / float64(repoSize) * 100.0
if repoSize == 0 {
percentage = 100
}
printer.P("read %d bytes (%.1f%%) of %sdata packs\n", subsetSize, percentage, typeData)
return packs
}, nil
}
return nil, nil
}
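// Illustrative, self-contained sketch of the three forms a --read-data-subset
// value can take, dispatched roughly the way buildPacksFilter above does it.
// parseSubsetSpec and SubsetSpec are hypothetical names used only for this
// example; restic's own parsing goes through stringToIntSlice, parsePercentage
// and ui.ParseBytes instead.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

type SubsetSpec struct {
	Bucket, TotalBuckets uint    // "2/5": read group 2 of 5 equally sized groups
	Percentage           float64 // "10%": read a random 10% of all pack files
	SizeBytes            int64   // "500M": read random pack files totalling ~500 MB
}

func parseSubsetSpec(s string) (SubsetSpec, error) {
	if before, after, ok := strings.Cut(s, "/"); ok {
		b, err1 := strconv.ParseUint(before, 10, 32)
		t, err2 := strconv.ParseUint(after, 10, 32)
		if err1 == nil && err2 == nil && b >= 1 && b <= t {
			return SubsetSpec{Bucket: uint(b), TotalBuckets: uint(t)}, nil
		}
	}
	if strings.HasSuffix(s, "%") {
		if p, err := strconv.ParseFloat(strings.TrimSuffix(s, "%"), 64); err == nil && p > 0 && p <= 100 {
			return SubsetSpec{Percentage: p}, nil
		}
	}
	// very rough size handling just for the example; only a trailing "M" is understood here
	if n, err := strconv.ParseInt(strings.TrimSuffix(s, "M"), 10, 64); err == nil && n > 0 {
		return SubsetSpec{SizeBytes: n * 1024 * 1024}, nil
	}
	return SubsetSpec{}, fmt.Errorf("invalid subset specification %q", s)
}

func main() {
	for _, s := range []string{"2/5", "10%", "500M"} {
		spec, err := parseSubsetSpec(s)
		fmt.Println(s, "->", spec, err)
	}
}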
// selectPacksByBucket selects subsets of packs by ranges of buckets.
func selectPacksByBucket(allPacks map[restic.ID]int64, bucket, totalBuckets uint) map[restic.ID]int64 {
packs := make(map[restic.ID]int64)
@@ -528,6 +547,10 @@ func (*jsonErrorPrinter) NewCounter(_ string) *progress.Counter {
return nil
}
func (*jsonErrorPrinter) NewCounterTerminalOnly(_ string) *progress.Counter {
return nil
}
func (p *jsonErrorPrinter) E(msg string, args ...interface{}) {
status := checkError{
MessageType: "error",
@@ -537,5 +560,6 @@ func (p *jsonErrorPrinter) E(msg string, args ...interface{}) {
}
func (*jsonErrorPrinter) S(_ string, _ ...interface{}) {}
func (*jsonErrorPrinter) P(_ string, _ ...interface{}) {}
func (*jsonErrorPrinter) PT(_ string, _ ...interface{}) {}
func (*jsonErrorPrinter) V(_ string, _ ...interface{}) {}
func (*jsonErrorPrinter) VV(_ string, _ ...interface{}) {}

View File

@@ -1,39 +1,101 @@
package main
import (
"bytes"
"context"
"strings"
"testing"
"github.com/restic/restic/internal/global"
rtest "github.com/restic/restic/internal/test"
"github.com/restic/restic/internal/ui/termstatus"
)
func testRunCheck(t testing.TB, gopts GlobalOptions) {
func testRunCheck(t testing.TB, gopts global.Options) {
t.Helper()
output, err := testRunCheckOutput(gopts, true)
output, err := testRunCheckOutput(t, gopts, true)
if err != nil {
t.Error(output)
t.Fatalf("unexpected error: %+v", err)
}
}
func testRunCheckMustFail(t testing.TB, gopts GlobalOptions) {
func testRunCheckMustFail(t testing.TB, gopts global.Options) {
t.Helper()
_, err := testRunCheckOutput(gopts, false)
_, err := testRunCheckOutput(t, gopts, false)
rtest.Assert(t, err != nil, "expected non-nil error after check of damaged repository")
}
func testRunCheckOutput(gopts GlobalOptions, checkUnused bool) (string, error) {
buf := bytes.NewBuffer(nil)
gopts.stdout = buf
err := withTermStatus(gopts, func(ctx context.Context, term *termstatus.Terminal) error {
func testRunCheckOutput(t testing.TB, gopts global.Options, checkUnused bool) (string, error) {
buf, err := withCaptureStdout(t, gopts, func(ctx context.Context, gopts global.Options) error {
opts := CheckOptions{
ReadData: true,
CheckUnused: checkUnused,
}
_, err := runCheck(context.TODO(), opts, gopts, nil, term)
_, err := runCheck(context.TODO(), opts, gopts, nil, gopts.Term)
return err
})
return buf.String(), err
}
func testRunCheckOutputWithOpts(t testing.TB, gopts global.Options, opts CheckOptions, args []string) (string, error) {
buf, err := withCaptureStdout(t, gopts, func(ctx context.Context, gopts global.Options) error {
gopts.Verbosity = 2
_, err := runCheck(context.TODO(), opts, gopts, args, gopts.Term)
return err
})
return buf.String(), err
}
func TestCheckWithSnapshotFilter(t *testing.T) {
testCases := []struct {
opts CheckOptions
args []string
expectedOutput string
}{
{ // full --read-data, all snapshots
CheckOptions{ReadData: true},
nil,
"4 / 4 packs",
},
{ // full --read-data, all snapshots
CheckOptions{ReadData: true},
nil,
"2 / 2 snapshots",
},
{ // full --read-data, latest snapshot
CheckOptions{ReadData: true},
[]string{"latest"},
"2 / 2 packs",
},
{ // full --read-data, latest snapshot
CheckOptions{ReadData: true},
[]string{"latest"},
"1 / 1 snapshots",
},
{ // --read-data-subset, latest snapshot
CheckOptions{ReadDataSubset: "1%"},
[]string{"latest"},
"1 / 1 packs",
},
{ // --read-data-subset, latest snapshot
CheckOptions{ReadDataSubset: "1%"},
[]string{"latest"},
"filtered",
},
}
env, cleanup := withTestEnvironment(t)
defer cleanup()
testSetupBackupData(t, env)
opts := BackupOptions{}
testRunBackup(t, env.testdata+"/0", []string{"for_cmd_ls"}, opts, env.gopts)
testRunBackup(t, env.testdata+"/0", []string{"0/9"}, opts, env.gopts)
for _, testCase := range testCases {
output, err := testRunCheckOutputWithOpts(t, env.gopts, testCase.opts, testCase.args)
rtest.OK(t, err)
hasOutput := strings.Contains(output, testCase.expectedOutput)
rtest.Assert(t, hasOutput, `expected to find substring %q, but did not find it`, testCase.expectedOutput)
}
}

View File

@@ -9,6 +9,7 @@ import (
"testing"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/restic"
rtest "github.com/restic/restic/internal/test"
"github.com/restic/restic/internal/ui/progress"
@@ -202,7 +203,7 @@ func TestPrepareCheckCache(t *testing.T) {
err := os.Remove(tmpDirBase)
rtest.OK(t, err)
}
gopts := GlobalOptions{CacheDir: tmpDirBase}
gopts := global.Options{CacheDir: tmpDirBase}
cleanup := prepareCheckCache(testCase.opts, &gopts, &progress.NoopPrinter{})
files, err := os.ReadDir(tmpDirBase)
rtest.OK(t, err)
@@ -232,7 +233,7 @@ func TestPrepareCheckCache(t *testing.T) {
}
func TestPrepareDefaultCheckCache(t *testing.T) {
gopts := GlobalOptions{CacheDir: ""}
gopts := global.Options{CacheDir: ""}
cleanup := prepareCheckCache(CheckOptions{}, &gopts, &progress.NoopPrinter{})
_, err := os.ReadDir(gopts.CacheDir)
rtest.OK(t, err)

View File

@@ -3,18 +3,24 @@ package main
import (
"context"
"fmt"
"iter"
"sync"
"time"
"github.com/restic/restic/internal/data"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
"golang.org/x/sync/errgroup"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/ui/progress"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
func newCopyCommand() *cobra.Command {
func newCopyCommand(globalOptions *global.Options) *cobra.Command {
var opts CopyOptions
cmd := &cobra.Command{
Use: "copy [flags] [snapshotID ...]",
@@ -46,7 +52,8 @@ Exit status is 12 if the password is incorrect.
GroupID: cmdGroupDefault,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runCopy(cmd.Context(), opts, globalOptions, args)
finalizeSnapshotFilter(&opts.SnapshotFilter)
return runCopy(cmd.Context(), opts, *globalOptions, args, globalOptions.Term)
},
}
@@ -56,17 +63,51 @@ Exit status is 12 if the password is incorrect.
// CopyOptions bundles all options for the copy command.
type CopyOptions struct {
secondaryRepoOptions
restic.SnapshotFilter
global.SecondaryRepoOptions
data.SnapshotFilter
}
func (opts *CopyOptions) AddFlags(f *pflag.FlagSet) {
opts.secondaryRepoOptions.AddFlags(f, "destination", "to copy snapshots from")
opts.SecondaryRepoOptions.AddFlags(f, "destination", "to copy snapshots from")
initMultiSnapshotFilter(f, &opts.SnapshotFilter, true)
}
func runCopy(ctx context.Context, opts CopyOptions, gopts GlobalOptions, args []string) error {
secondaryGopts, isFromRepo, err := fillSecondaryGlobalOpts(ctx, opts.secondaryRepoOptions, gopts, "destination")
// collectAllSnapshots yields all source snapshots that still need to be copied to the destination
func collectAllSnapshots(ctx context.Context, opts CopyOptions,
srcSnapshotLister restic.Lister, srcRepo restic.Repository,
dstSnapshotByOriginal map[restic.ID][]*data.Snapshot, args []string, printer progress.Printer,
) iter.Seq[*data.Snapshot] {
return func(yield func(*data.Snapshot) bool) {
for sn := range FindFilteredSnapshots(ctx, srcSnapshotLister, srcRepo, &opts.SnapshotFilter, args, printer) {
// check whether the destination has a snapshot with the same persistent ID which has similar snapshot fields
srcOriginal := *sn.ID()
if sn.Original != nil {
srcOriginal = *sn.Original
}
if originalSns, ok := dstSnapshotByOriginal[srcOriginal]; ok {
isCopy := false
for _, originalSn := range originalSns {
if similarSnapshots(originalSn, sn) {
printer.V("\n%v", sn)
printer.V("skipping source snapshot %s, was already copied to snapshot %s", sn.ID().Str(), originalSn.ID().Str())
isCopy = true
break
}
}
if isCopy {
continue
}
}
if !yield(sn) {
return
}
}
}
}
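// Illustrative, self-contained sketch of the push-style iterator pattern used by
// collectAllSnapshots above: the function returns an iter.Seq whose body decides,
// element by element, whether to forward a value to yield, and the consumer simply
// ranges over it. Only the standard library iter package is real here; filtered
// and the sample values are hypothetical.
package main

import (
	"fmt"
	"iter"
)

func filtered(values []int, keep func(int) bool) iter.Seq[int] {
	return func(yield func(int) bool) {
		for _, v := range values {
			if !keep(v) {
				continue // skip, e.g. a snapshot that was already copied
			}
			if !yield(v) {
				return // the consumer stopped early
			}
		}
	}
}

func main() {
	even := filtered([]int{1, 2, 3, 4, 5, 6}, func(v int) bool { return v%2 == 0 })
	for v := range even {
		fmt.Println(v)
	}
}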
func runCopy(ctx context.Context, opts CopyOptions, gopts global.Options, args []string, term ui.Terminal) error {
printer := ui.NewProgressPrinter(false, gopts.Verbosity, term)
secondaryGopts, isFromRepo, err := opts.SecondaryRepoOptions.FillGlobalOpts(ctx, gopts, "destination")
if err != nil {
return err
}
@@ -75,13 +116,13 @@ func runCopy(ctx context.Context, opts CopyOptions, gopts GlobalOptions, args []
gopts, secondaryGopts = secondaryGopts, gopts
}
ctx, srcRepo, unlock, err := openWithReadLock(ctx, gopts, gopts.NoLock)
ctx, srcRepo, unlock, err := openWithReadLock(ctx, gopts, gopts.NoLock, printer)
if err != nil {
return err
}
defer unlock()
ctx, dstRepo, unlock, err := openWithAppendLock(ctx, secondaryGopts, false)
ctx, dstRepo, unlock, err := openWithAppendLock(ctx, secondaryGopts, false, printer)
if err != nil {
return err
}
@@ -98,18 +139,16 @@ func runCopy(ctx context.Context, opts CopyOptions, gopts GlobalOptions, args []
}
debug.Log("Loading source index")
bar := newIndexProgress(gopts.Quiet, gopts.JSON)
if err := srcRepo.LoadIndex(ctx, bar); err != nil {
if err := srcRepo.LoadIndex(ctx, printer); err != nil {
return err
}
bar = newIndexProgress(gopts.Quiet, gopts.JSON)
debug.Log("Loading destination index")
if err := dstRepo.LoadIndex(ctx, bar); err != nil {
if err := dstRepo.LoadIndex(ctx, printer); err != nil {
return err
}
dstSnapshotByOriginal := make(map[restic.ID][]*restic.Snapshot)
for sn := range FindFilteredSnapshots(ctx, dstSnapshotLister, dstRepo, &opts.SnapshotFilter, nil) {
dstSnapshotByOriginal := make(map[restic.ID][]*data.Snapshot)
for sn := range FindFilteredSnapshots(ctx, dstSnapshotLister, dstRepo, &opts.SnapshotFilter, nil, printer) {
if sn.Original != nil && !sn.Original.IsNull() {
dstSnapshotByOriginal[*sn.Original] = append(dstSnapshotByOriginal[*sn.Original], sn)
}
@@ -120,53 +159,16 @@ func runCopy(ctx context.Context, opts CopyOptions, gopts GlobalOptions, args []
return ctx.Err()
}
// remember already processed trees across all snapshots
visitedTrees := restic.NewIDSet()
selectedSnapshots := collectAllSnapshots(ctx, opts, srcSnapshotLister, srcRepo, dstSnapshotByOriginal, args, printer)
for sn := range FindFilteredSnapshots(ctx, srcSnapshotLister, srcRepo, &opts.SnapshotFilter, args) {
// check whether the destination has a snapshot with the same persistent ID which has similar snapshot fields
srcOriginal := *sn.ID()
if sn.Original != nil {
srcOriginal = *sn.Original
}
if originalSns, ok := dstSnapshotByOriginal[srcOriginal]; ok {
isCopy := false
for _, originalSn := range originalSns {
if similarSnapshots(originalSn, sn) {
Verboseff("\n%v\n", sn)
Verboseff("skipping source snapshot %s, was already copied to snapshot %s\n", sn.ID().Str(), originalSn.ID().Str())
isCopy = true
break
}
}
if isCopy {
continue
}
}
Verbosef("\n%v\n", sn)
Verbosef(" copy started, this may take a while...\n")
if err := copyTree(ctx, srcRepo, dstRepo, visitedTrees, *sn.Tree, gopts.Quiet); err != nil {
return err
}
debug.Log("tree copied")
// save snapshot
sn.Parent = nil // Parent does not have relevance in the new repo.
// Use Original as a persistent snapshot ID
if sn.Original == nil {
sn.Original = sn.ID()
}
newID, err := restic.SaveSnapshot(ctx, dstRepo, sn)
if err != nil {
return err
}
Verbosef("snapshot %s saved\n", newID.Str())
if err := copyTreeBatched(ctx, srcRepo, dstRepo, selectedSnapshots, printer); err != nil {
return err
}
return ctx.Err()
}
func similarSnapshots(sna *restic.Snapshot, snb *restic.Snapshot) bool {
func similarSnapshots(sna *data.Snapshot, snb *data.Snapshot) bool {
// everything except Parent and Original must match
if !sna.Time.Equal(snb.Time) || !sna.Tree.Equal(*snb.Tree) || sna.Hostname != snb.Hostname ||
sna.Username != snb.Username || sna.UID != snb.UID || sna.GID != snb.GID ||
@@ -185,72 +187,158 @@ func similarSnapshots(sna *restic.Snapshot, snb *restic.Snapshot) bool {
return true
}
func copyTree(ctx context.Context, srcRepo restic.Repository, dstRepo restic.Repository,
visitedTrees restic.IDSet, rootTreeID restic.ID, quiet bool) error {
// copyTreeBatched copies multiple snapshots in one go. A batch of snapshots is written
// once data equivalent to at least 100 pack files has been copied and at least one minute has passed.
func copyTreeBatched(ctx context.Context, srcRepo restic.Repository, dstRepo restic.Repository,
selectedSnapshots iter.Seq[*data.Snapshot], printer progress.Printer) error {
wg, wgCtx := errgroup.WithContext(ctx)
// remember already processed trees across all snapshots
visitedTrees := srcRepo.NewAssociatedBlobSet()
treeStream := restic.StreamTrees(wgCtx, wg, srcRepo, restic.IDs{rootTreeID}, func(treeID restic.ID) bool {
visited := visitedTrees.Has(treeID)
visitedTrees.Insert(treeID)
return visited
}, nil)
targetSize := uint64(dstRepo.PackSize()) * 100
minDuration := 1 * time.Minute
copyBlobs := restic.NewBlobSet()
packList := restic.NewIDSet()
// use pull-based iterator to allow iteration in multiple steps
next, stop := iter.Pull(selectedSnapshots)
defer stop()
enqueue := func(h restic.BlobHandle) {
pb := srcRepo.LookupBlob(h.Type, h.ID)
copyBlobs.Insert(h)
for _, p := range pb {
packList.Insert(p.PackID)
for {
var batch []*data.Snapshot
batchSize := uint64(0)
startTime := time.Now()
// call WithBlobUploader() once per batch and copy snapshots into it until the batch targets are reached
err := dstRepo.WithBlobUploader(ctx, func(ctx context.Context, uploader restic.BlobSaverWithAsync) error {
for batchSize < targetSize || time.Since(startTime) < minDuration {
sn, ok := next()
if !ok {
break
}
batch = append(batch, sn)
printer.P("\n%v", sn)
printer.P(" copy started, this may take a while...")
sizeBlobs, err := copyTree(ctx, srcRepo, dstRepo, visitedTrees, *sn.Tree, printer, uploader)
if err != nil {
return err
}
debug.Log("tree copied")
batchSize += sizeBlobs
}
return nil
})
if err != nil {
return err
}
// if no snapshots were processed in this batch, we're done
if len(batch) == 0 {
break
}
// add a newline to separate saved snapshot messages from the other messages
if len(batch) > 1 {
printer.P("")
}
// save all the snapshots
for _, sn := range batch {
err := copySaveSnapshot(ctx, sn, dstRepo, printer)
if err != nil {
return err
}
}
}
wg.Go(func() error {
for tree := range treeStream {
if tree.Error != nil {
return fmt.Errorf("LoadTree(%v) returned error %v", tree.ID.Str(), tree.Error)
}
return nil
}
// Do we already have this tree blob?
treeHandle := restic.BlobHandle{ID: tree.ID, Type: restic.TreeBlob}
if _, ok := dstRepo.LookupBlobSize(treeHandle.Type, treeHandle.ID); !ok {
// copy raw tree bytes to avoid problems if the serialization changes
enqueue(treeHandle)
}
func copyTree(ctx context.Context, srcRepo restic.Repository, dstRepo restic.Repository,
visitedTrees restic.AssociatedBlobSet, rootTreeID restic.ID, printer progress.Printer, uploader restic.BlobSaverWithAsync) (uint64, error) {
for _, entry := range tree.Nodes {
// Recursion into directories is handled by StreamTrees
// Copy the blobs for this file.
for _, blobID := range entry.Content {
h := restic.BlobHandle{Type: restic.DataBlob, ID: blobID}
if _, ok := dstRepo.LookupBlobSize(h.Type, h.ID); !ok {
enqueue(h)
}
}
copyBlobs := srcRepo.NewAssociatedBlobSet()
packList := restic.NewIDSet()
var lock sync.Mutex
enqueue := func(h restic.BlobHandle) {
lock.Lock()
defer lock.Unlock()
if _, ok := dstRepo.LookupBlobSize(h.Type, h.ID); !ok {
pb := srcRepo.LookupBlob(h.Type, h.ID)
copyBlobs.Insert(h)
for _, p := range pb {
packList.Insert(p.PackID)
}
}
}
err := data.StreamTrees(ctx, srcRepo, restic.IDs{rootTreeID}, nil, func(treeID restic.ID) bool {
handle := restic.BlobHandle{ID: treeID, Type: restic.TreeBlob}
visited := visitedTrees.Has(handle)
visitedTrees.Insert(handle)
return visited
}, func(treeID restic.ID, err error, nodes data.TreeNodeIterator) error {
if err != nil {
return fmt.Errorf("LoadTree(%v) returned error %v", treeID.Str(), err)
}
// copy raw tree bytes to avoid problems if the serialization changes
enqueue(restic.BlobHandle{ID: treeID, Type: restic.TreeBlob})
for item := range nodes {
if item.Error != nil {
return item.Error
}
// Recursion into directories is handled by StreamTrees
// Copy the blobs for this file.
for _, blobID := range item.Node.Content {
enqueue(restic.BlobHandle{Type: restic.DataBlob, ID: blobID})
}
}
return nil
})
err := wg.Wait()
if err != nil {
return 0, err
}
sizeBlobs := copyStats(srcRepo, copyBlobs, packList, printer)
bar := printer.NewCounter("packs copied")
err = repository.CopyBlobs(ctx, srcRepo, dstRepo, uploader, packList, copyBlobs, bar, printer.P)
if err != nil {
return 0, errors.Fatalf("%s", err)
}
return sizeBlobs, nil
}
// copyStats: print statistics for the blobs to be copied
func copyStats(srcRepo restic.Repository, copyBlobs restic.AssociatedBlobSet, packList restic.IDSet, printer progress.Printer) uint64 {
// count and size
countBlobs := 0
sizeBlobs := uint64(0)
for blob := range copyBlobs.Keys() {
for _, blob := range srcRepo.LookupBlob(blob.Type, blob.ID) {
countBlobs++
sizeBlobs += uint64(blob.Length)
break
}
}
printer.V(" copy %d blobs with disk size %s in %d packfiles\n",
countBlobs, ui.FormatBytes(uint64(sizeBlobs)), len(packList))
return sizeBlobs
}
func copySaveSnapshot(ctx context.Context, sn *data.Snapshot, dstRepo restic.Repository, printer progress.Printer) error {
sn.Parent = nil // Parent does not have relevance in the new repo.
// Use Original as a persistent snapshot ID
if sn.Original == nil {
sn.Original = sn.ID()
}
newID, err := data.SaveSnapshot(ctx, dstRepo, sn)
if err != nil {
return err
}
bar := newProgressMax(!quiet, uint64(len(packList)), "packs copied")
_, err = repository.Repack(
ctx,
srcRepo,
dstRepo,
packList,
copyBlobs,
bar,
func(msg string, args ...interface{}) { fmt.Printf(msg+"\n", args...) },
)
bar.Done()
if err != nil {
return errors.Fatal(err.Error())
}
printer.P("snapshot %s saved, copied from source snapshot %s", newID.Str(), sn.ID().Str())
return nil
}
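// Illustrative, self-contained sketch of why copyTreeBatched uses iter.Pull: a
// pull-style iterator lets one batch consume part of a sequence, stop at a
// threshold, and lets the next batch continue exactly where the previous one left
// off. Only the standard library iter package is real here; the batching limits
// are made up for the example.
package main

import (
	"fmt"
	"iter"
)

func numbers(n int) iter.Seq[int] {
	return func(yield func(int) bool) {
		for i := 1; i <= n; i++ {
			if !yield(i) {
				return
			}
		}
	}
}

func main() {
	next, stop := iter.Pull(numbers(10))
	defer stop()

	const targetPerBatch = 4 // stands in for the pack-size and duration targets above
	for batchNo := 1; ; batchNo++ {
		var batch []int
		for len(batch) < targetPerBatch {
			v, ok := next()
			if !ok {
				break
			}
			batch = append(batch, v)
		}
		if len(batch) == 0 {
			break // sequence exhausted, no further batches
		}
		fmt.Printf("batch %d: %v\n", batchNo, batch)
	}
}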

View File

@@ -6,23 +6,28 @@ import (
"path/filepath"
"testing"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/restic"
rtest "github.com/restic/restic/internal/test"
"github.com/restic/restic/internal/ui"
)
func testRunCopy(t testing.TB, srcGopts GlobalOptions, dstGopts GlobalOptions) {
func testRunCopy(t testing.TB, srcGopts global.Options, dstGopts global.Options) {
gopts := srcGopts
gopts.Repo = dstGopts.Repo
gopts.password = dstGopts.password
gopts.Password = dstGopts.Password
gopts.InsecureNoPassword = dstGopts.InsecureNoPassword
copyOpts := CopyOptions{
secondaryRepoOptions: secondaryRepoOptions{
SecondaryRepoOptions: global.SecondaryRepoOptions{
Repo: srcGopts.Repo,
password: srcGopts.password,
Password: srcGopts.Password,
InsecureNoPassword: srcGopts.InsecureNoPassword,
},
}
rtest.OK(t, runCopy(context.TODO(), copyOpts, gopts, nil))
rtest.OK(t, withTermStatus(t, gopts, func(ctx context.Context, gopts global.Options) error {
return runCopy(context.TODO(), copyOpts, gopts, nil, gopts.Term)
}))
}
func TestCopy(t *testing.T) {
@@ -45,8 +50,8 @@ func TestCopy(t *testing.T) {
copiedSnapshotIDs := testListSnapshots(t, env2.gopts, 3)
// Check that the copies size seems reasonable
stat := dirStats(env.repo)
stat2 := dirStats(env2.repo)
stat := dirStats(t, env.repo)
stat2 := dirStats(t, env2.repo)
sizeDiff := int64(stat.size) - int64(stat2.size)
if sizeDiff < 0 {
sizeDiff = -sizeDiff
@@ -69,7 +74,7 @@ func TestCopy(t *testing.T) {
testRunRestore(t, env2.gopts, restoredir, snapshotID.String())
foundMatch := false
for cmpdir := range origRestores {
diff := directoriesContentsDiff(restoredir, cmpdir)
diff := directoriesContentsDiff(t, restoredir, cmpdir)
if diff == "" {
delete(origRestores, cmpdir)
foundMatch = true
@@ -80,6 +85,41 @@ func TestCopy(t *testing.T) {
}
rtest.Assert(t, len(origRestores) == 0, "found snapshots that were not copied")
// check that snapshots were properly batched while copying
_, _, countBlobs := testPackAndBlobCounts(t, env.gopts)
countTreePacksDst, countDataPacksDst, countBlobsDst := testPackAndBlobCounts(t, env2.gopts)
rtest.Equals(t, countBlobs, countBlobsDst, "expected blob count in both repos to be equal")
rtest.Equals(t, countTreePacksDst, 1, "expected 1 tree packfile")
rtest.Equals(t, countDataPacksDst, 1, "expected 1 data packfile")
}
func testPackAndBlobCounts(t testing.TB, gopts global.Options) (countTreePacks int, countDataPacks int, countBlobs int) {
rtest.OK(t, withTermStatus(t, gopts, func(ctx context.Context, gopts global.Options) error {
printer := ui.NewProgressPrinter(gopts.JSON, gopts.Verbosity, gopts.Term)
_, repo, unlock, err := openWithReadLock(ctx, gopts, false, printer)
rtest.OK(t, err)
defer unlock()
rtest.OK(t, repo.List(context.TODO(), restic.PackFile, func(id restic.ID, size int64) error {
blobs, _, err := repo.ListPack(context.TODO(), id, size)
rtest.OK(t, err)
rtest.Assert(t, len(blobs) > 0, "a packfile should contain at least one blob")
switch blobs[0].Type {
case restic.TreeBlob:
countTreePacks++
case restic.DataBlob:
countDataPacks++
}
countBlobs += len(blobs)
return nil
}))
return nil
}))
return countTreePacks, countDataPacks, countBlobs
}
func TestCopyIncremental(t *testing.T) {
@@ -142,7 +182,7 @@ func TestCopyToEmptyPassword(t *testing.T) {
defer cleanup()
env2, cleanup2 := withTestEnvironment(t)
defer cleanup2()
env2.gopts.password = ""
env2.gopts.Password = ""
env2.gopts.InsecureNoPassword = true
testSetupBackupData(t, env)

View File

@@ -1,5 +1,4 @@
//go:build debug
// +build debug
package main
@@ -22,32 +21,36 @@ import (
"golang.org/x/sync/errgroup"
"github.com/restic/restic/internal/crypto"
"github.com/restic/restic/internal/data"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/repository/index"
"github.com/restic/restic/internal/repository/pack"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/ui/progress"
)
func registerDebugCommand(cmd *cobra.Command) {
func registerDebugCommand(cmd *cobra.Command, globalOptions *global.Options) {
cmd.AddCommand(
newDebugCommand(),
newDebugCommand(globalOptions),
)
}
func newDebugCommand() *cobra.Command {
func newDebugCommand(globalOptions *global.Options) *cobra.Command {
cmd := &cobra.Command{
Use: "debug",
Short: "Debug commands",
GroupID: cmdGroupDefault,
DisableAutoGenTag: true,
}
cmd.AddCommand(newDebugDumpCommand())
cmd.AddCommand(newDebugExamineCommand())
cmd.AddCommand(newDebugDumpCommand(globalOptions))
cmd.AddCommand(newDebugExamineCommand(globalOptions))
return cmd
}
func newDebugDumpCommand() *cobra.Command {
func newDebugDumpCommand(globalOptions *global.Options) *cobra.Command {
cmd := &cobra.Command{
Use: "dump [indexes|snapshots|all|packs]",
Short: "Dump data structures",
@@ -66,13 +69,13 @@ Exit status is 12 if the password is incorrect.
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runDebugDump(cmd.Context(), globalOptions, args)
return runDebugDump(cmd.Context(), *globalOptions, args, globalOptions.Term)
},
}
return cmd
}
func newDebugExamineCommand() *cobra.Command {
func newDebugExamineCommand(globalOptions *global.Options) *cobra.Command {
var opts DebugExamineOptions
cmd := &cobra.Command{
@@ -80,7 +83,7 @@ func newDebugExamineCommand() *cobra.Command {
Short: "Examine a pack file",
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runDebugExamine(cmd.Context(), globalOptions, opts, args)
return runDebugExamine(cmd.Context(), *globalOptions, opts, args, globalOptions.Term)
},
}
@@ -113,7 +116,7 @@ func prettyPrintJSON(wr io.Writer, item interface{}) error {
}
func debugPrintSnapshots(ctx context.Context, repo *repository.Repository, wr io.Writer) error {
return restic.ForAllSnapshots(ctx, repo, repo, nil, func(id restic.ID, snapshot *restic.Snapshot, err error) error {
return data.ForAllSnapshots(ctx, repo, repo, nil, func(id restic.ID, snapshot *data.Snapshot, err error) error {
if err != nil {
return err
}
@@ -141,13 +144,13 @@ type Blob struct {
Offset uint `json:"offset"`
}
func printPacks(ctx context.Context, repo *repository.Repository, wr io.Writer) error {
func printPacks(ctx context.Context, repo *repository.Repository, wr io.Writer, printer progress.Printer) error {
var m sync.Mutex
return restic.ParallelList(ctx, repo, restic.PackFile, repo.Connections(), func(ctx context.Context, id restic.ID, size int64) error {
blobs, _, err := repo.ListPack(ctx, id, size)
if err != nil {
Warnf("error for pack %v: %v\n", id.Str(), err)
printer.E("error for pack %v: %v", id.Str(), err)
return nil
}
@@ -170,9 +173,9 @@ func printPacks(ctx context.Context, repo *repository.Repository, wr io.Writer)
})
}
func dumpIndexes(ctx context.Context, repo restic.ListerLoaderUnpacked, wr io.Writer) error {
func dumpIndexes(ctx context.Context, repo restic.ListerLoaderUnpacked, wr io.Writer, printer progress.Printer) error {
return index.ForAllIndexes(ctx, repo, repo, func(id restic.ID, idx *index.Index, err error) error {
Printf("index_id: %v\n", id)
printer.S("index_id: %v", id)
if err != nil {
return err
}
@@ -181,12 +184,14 @@ func dumpIndexes(ctx context.Context, repo restic.ListerLoaderUnpacked, wr io.Wr
})
}
func runDebugDump(ctx context.Context, gopts GlobalOptions, args []string) error {
func runDebugDump(ctx context.Context, gopts global.Options, args []string, term ui.Terminal) error {
printer := ui.NewProgressPrinter(false, gopts.Verbosity, term)
if len(args) != 1 {
return errors.Fatal("type not specified")
}
ctx, repo, unlock, err := openWithReadLock(ctx, gopts, gopts.NoLock)
ctx, repo, unlock, err := openWithReadLock(ctx, gopts, gopts.NoLock, printer)
if err != nil {
return err
}
@@ -196,20 +201,20 @@ func runDebugDump(ctx context.Context, gopts GlobalOptions, args []string) error
switch tpe {
case "indexes":
return dumpIndexes(ctx, repo, globalOptions.stdout)
return dumpIndexes(ctx, repo, gopts.Term.OutputWriter(), printer)
case "snapshots":
return debugPrintSnapshots(ctx, repo, globalOptions.stdout)
return debugPrintSnapshots(ctx, repo, gopts.Term.OutputWriter())
case "packs":
return printPacks(ctx, repo, globalOptions.stdout)
return printPacks(ctx, repo, gopts.Term.OutputWriter(), printer)
case "all":
Printf("snapshots:\n")
err := debugPrintSnapshots(ctx, repo, globalOptions.stdout)
printer.S("snapshots:")
err := debugPrintSnapshots(ctx, repo, gopts.Term.OutputWriter())
if err != nil {
return err
}
Printf("\nindexes:\n")
err = dumpIndexes(ctx, repo, globalOptions.stdout)
printer.S("indexes:")
err = dumpIndexes(ctx, repo, gopts.Term.OutputWriter(), printer)
if err != nil {
return err
}
@@ -220,11 +225,11 @@ func runDebugDump(ctx context.Context, gopts GlobalOptions, args []string) error
}
}
func tryRepairWithBitflip(key *crypto.Key, input []byte, bytewise bool) []byte {
func tryRepairWithBitflip(key *crypto.Key, input []byte, bytewise bool, printer progress.Printer) []byte {
if bytewise {
Printf(" trying to repair blob by finding a broken byte\n")
printer.S(" trying to repair blob by finding a broken byte")
} else {
Printf(" trying to repair blob with single bit flip\n")
printer.S(" trying to repair blob with single bit flip")
}
ch := make(chan int)
@@ -234,7 +239,7 @@ func tryRepairWithBitflip(key *crypto.Key, input []byte, bytewise bool) []byte {
var found bool
workers := runtime.GOMAXPROCS(0)
Printf(" spinning up %d worker functions\n", runtime.GOMAXPROCS(0))
printer.S(" spinning up %d worker functions", runtime.GOMAXPROCS(0))
for i := 0; i < workers; i++ {
wg.Go(func() error {
// make a local copy of the buffer
@@ -248,9 +253,9 @@ func tryRepairWithBitflip(key *crypto.Key, input []byte, bytewise bool) []byte {
nonce, plaintext := buf[:key.NonceSize()], buf[key.NonceSize():]
plaintext, err := key.Open(plaintext[:0], nonce, plaintext, nil)
if err == nil {
Printf("\n")
Printf(" blob could be repaired by XORing byte %v with 0x%02x\n", idx, pattern)
Printf(" hash is %v\n", restic.Hash(plaintext))
printer.S("")
printer.S(" blob could be repaired by XORing byte %v with 0x%02x", idx, pattern)
printer.S(" hash is %v", restic.Hash(plaintext))
close(done)
found = true
fixed = plaintext
@@ -291,7 +296,7 @@ func tryRepairWithBitflip(key *crypto.Key, input []byte, bytewise bool) []byte {
select {
case ch <- i:
case <-done:
Printf(" done after %v\n", time.Since(start))
printer.S(" done after %v", time.Since(start))
return nil
}
@@ -301,7 +306,7 @@ func tryRepairWithBitflip(key *crypto.Key, input []byte, bytewise bool) []byte {
remaining := len(input) - i
eta := time.Duration(float64(remaining)/gps) * time.Second
Printf("\r%d byte of %d done (%.2f%%), %.0f byte per second, ETA %v",
printer.S("\r%d byte of %d done (%.2f%%), %.0f byte per second, ETA %v",
i, len(input), float32(i)/float32(len(input))*100, gps, eta)
info = time.Now()
}
@@ -314,7 +319,7 @@ func tryRepairWithBitflip(key *crypto.Key, input []byte, bytewise bool) []byte {
}
if !found {
Printf("\n blob could not be repaired\n")
printer.S("\n blob could not be repaired")
}
return fixed
}
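// Illustrative, self-contained sketch of the brute-force idea behind
// tryRepairWithBitflip: XOR every single-bit pattern into every byte position and
// test whether the result validates. The sketch uses a known SHA-256 as the
// validator; restic instead retries an authenticated decryption, also supports
// whole-byte patterns, and runs workers in parallel. All names are hypothetical.
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

func repairSingleBitFlip(corrupt []byte, wantHash [32]byte) ([]byte, bool) {
	buf := bytes.Clone(corrupt)
	for i := range buf {
		for bit := 0; bit < 8; bit++ {
			buf[i] ^= 1 << bit // try flipping this bit
			if sha256.Sum256(buf) == wantHash {
				return buf, true
			}
			buf[i] ^= 1 << bit // undo and keep searching
		}
	}
	return nil, false
}

func main() {
	original := []byte("restic pack file contents")
	want := sha256.Sum256(original)

	damaged := bytes.Clone(original)
	damaged[7] ^= 0x10 // simulate a single flipped bit

	if fixed, ok := repairSingleBitFlip(damaged, want); ok {
		fmt.Printf("repaired: %q\n", fixed)
	}
}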
@@ -335,7 +340,7 @@ func decryptUnsigned(k *crypto.Key, buf []byte) []byte {
return out
}
func loadBlobs(ctx context.Context, opts DebugExamineOptions, repo restic.Repository, packID restic.ID, list []restic.Blob) error {
func loadBlobs(ctx context.Context, opts DebugExamineOptions, repo restic.Repository, packID restic.ID, list []restic.Blob, printer progress.Printer) error {
dec, err := zstd.NewReader(nil)
if err != nil {
panic(err)
@@ -347,17 +352,11 @@ func loadBlobs(ctx context.Context, opts DebugExamineOptions, repo restic.Reposi
return err
}
wg, ctx := errgroup.WithContext(ctx)
if opts.ReuploadBlobs {
repo.StartPackUploader(ctx, wg)
}
wg.Go(func() error {
err = repo.WithBlobUploader(ctx, func(ctx context.Context, uploader restic.BlobSaverWithAsync) error {
for _, blob := range list {
Printf(" loading blob %v at %v (length %v)\n", blob.ID, blob.Offset, blob.Length)
printer.S(" loading blob %v at %v (length %v)", blob.ID, blob.Offset, blob.Length)
if int(blob.Offset+blob.Length) > len(pack) {
Warnf("skipping truncated blob\n")
printer.E("skipping truncated blob")
continue
}
buf := pack[blob.Offset : blob.Offset+blob.Length]
@@ -368,16 +367,16 @@ func loadBlobs(ctx context.Context, opts DebugExamineOptions, repo restic.Reposi
outputPrefix := ""
filePrefix := ""
if err != nil {
Warnf("error decrypting blob: %v\n", err)
printer.E("error decrypting blob: %v", err)
if opts.TryRepair || opts.RepairByte {
plaintext = tryRepairWithBitflip(key, buf, opts.RepairByte)
plaintext = tryRepairWithBitflip(key, buf, opts.RepairByte, printer)
}
if plaintext != nil {
outputPrefix = "repaired "
filePrefix = "repaired-"
} else {
plaintext = decryptUnsigned(key, buf)
err = storePlainBlob(blob.ID, "damaged-", plaintext)
err = storePlainBlob(blob.ID, "damaged-", plaintext, printer)
if err != nil {
return err
}
@@ -388,7 +387,7 @@ func loadBlobs(ctx context.Context, opts DebugExamineOptions, repo restic.Reposi
if blob.IsCompressed() {
decompressed, err := dec.DecodeAll(plaintext, nil)
if err != nil {
Printf(" failed to decompress blob %v\n", blob.ID)
printer.S(" failed to decompress blob %v", blob.ID)
}
if decompressed != nil {
plaintext = decompressed
@@ -398,37 +397,32 @@ func loadBlobs(ctx context.Context, opts DebugExamineOptions, repo restic.Reposi
id := restic.Hash(plaintext)
var prefix string
if !id.Equal(blob.ID) {
Printf(" successfully %vdecrypted blob (length %v), hash is %v, ID does not match, wanted %v\n", outputPrefix, len(plaintext), id, blob.ID)
printer.S(" successfully %vdecrypted blob (length %v), hash is %v, ID does not match, wanted %v", outputPrefix, len(plaintext), id, blob.ID)
prefix = "wrong-hash-"
} else {
Printf(" successfully %vdecrypted blob (length %v), hash is %v, ID matches\n", outputPrefix, len(plaintext), id)
printer.S(" successfully %vdecrypted blob (length %v), hash is %v, ID matches", outputPrefix, len(plaintext), id)
prefix = "correct-"
}
if opts.ExtractPack {
err = storePlainBlob(id, filePrefix+prefix, plaintext)
err = storePlainBlob(id, filePrefix+prefix, plaintext, printer)
if err != nil {
return err
}
}
if opts.ReuploadBlobs {
_, _, _, err := repo.SaveBlob(ctx, blob.Type, plaintext, id, true)
_, _, _, err := uploader.SaveBlob(ctx, blob.Type, plaintext, id, true)
if err != nil {
return err
}
Printf(" uploaded %v %v\n", blob.Type, id)
printer.S(" uploaded %v %v", blob.Type, id)
}
}
if opts.ReuploadBlobs {
return repo.Flush(ctx)
}
return nil
})
return wg.Wait()
return err
}
func storePlainBlob(id restic.ID, prefix string, plain []byte) error {
func storePlainBlob(id restic.ID, prefix string, plain []byte, printer progress.Printer) error {
filename := fmt.Sprintf("%s%s.bin", prefix, id)
f, err := os.Create(filename)
if err != nil {
@@ -446,16 +440,18 @@ func storePlainBlob(id restic.ID, prefix string, plain []byte) error {
return err
}
Printf("decrypt of blob %v stored at %v\n", id, filename)
printer.S("decrypt of blob %v stored at %v", id, filename)
return nil
}
func runDebugExamine(ctx context.Context, gopts GlobalOptions, opts DebugExamineOptions, args []string) error {
func runDebugExamine(ctx context.Context, gopts global.Options, opts DebugExamineOptions, args []string, term ui.Terminal) error {
printer := ui.NewProgressPrinter(false, gopts.Verbosity, term)
if opts.ExtractPack && gopts.NoLock {
return fmt.Errorf("--extract-pack and --no-lock are mutually exclusive")
}
ctx, repo, unlock, err := openWithAppendLock(ctx, gopts, gopts.NoLock)
ctx, repo, unlock, err := openWithAppendLock(ctx, gopts, gopts.NoLock, printer)
if err != nil {
return err
}
@@ -467,7 +463,7 @@ func runDebugExamine(ctx context.Context, gopts GlobalOptions, opts DebugExamine
if err != nil {
id, err = restic.Find(ctx, repo, restic.PackFile, name)
if err != nil {
Warnf("error: %v\n", err)
printer.E("error: %v", err)
continue
}
}
@@ -478,16 +474,15 @@ func runDebugExamine(ctx context.Context, gopts GlobalOptions, opts DebugExamine
return errors.Fatal("no pack files to examine")
}
bar := newIndexProgress(gopts.Quiet, gopts.JSON)
err = repo.LoadIndex(ctx, bar)
err = repo.LoadIndex(ctx, printer)
if err != nil {
return err
}
for _, id := range ids {
err := examinePack(ctx, opts, repo, id)
err := examinePack(ctx, opts, repo, id, printer)
if err != nil {
Warnf("error: %v\n", err)
printer.E("error: %v", err)
}
if err == context.Canceled {
break
@@ -496,24 +491,24 @@ func runDebugExamine(ctx context.Context, gopts GlobalOptions, opts DebugExamine
return nil
}
func examinePack(ctx context.Context, opts DebugExamineOptions, repo restic.Repository, id restic.ID) error {
Printf("examine %v\n", id)
func examinePack(ctx context.Context, opts DebugExamineOptions, repo restic.Repository, id restic.ID, printer progress.Printer) error {
printer.S("examine %v", id)
buf, err := repo.LoadRaw(ctx, restic.PackFile, id)
// also process damaged pack files
if buf == nil {
return err
}
Printf(" file size is %v\n", len(buf))
printer.S(" file size is %v", len(buf))
gotID := restic.Hash(buf)
if !id.Equal(gotID) {
Printf(" wanted hash %v, got %v\n", id, gotID)
printer.S(" wanted hash %v, got %v", id, gotID)
} else {
Printf(" hash for file content matches\n")
printer.S(" hash for file content matches")
}
Printf(" ========================================\n")
Printf(" looking for info in the indexes\n")
printer.S(" ========================================")
printer.S(" looking for info in the indexes")
blobsLoaded := false
// examine all data the indexes have for the pack file
@@ -523,32 +518,32 @@ func examinePack(ctx context.Context, opts DebugExamineOptions, repo restic.Repo
continue
}
checkPackSize(blobs, len(buf))
checkPackSize(blobs, len(buf), printer)
err = loadBlobs(ctx, opts, repo, id, blobs)
err = loadBlobs(ctx, opts, repo, id, blobs, printer)
if err != nil {
Warnf("error: %v\n", err)
printer.E("error: %v", err)
} else {
blobsLoaded = true
}
}
Printf(" ========================================\n")
Printf(" inspect the pack itself\n")
printer.S(" ========================================")
printer.S(" inspect the pack itself")
blobs, _, err := repo.ListPack(ctx, id, int64(len(buf)))
if err != nil {
return fmt.Errorf("pack %v: %v", id.Str(), err)
}
checkPackSize(blobs, len(buf))
checkPackSize(blobs, len(buf), printer)
if !blobsLoaded {
return loadBlobs(ctx, opts, repo, id, blobs)
return loadBlobs(ctx, opts, repo, id, blobs, printer)
}
return nil
}
func checkPackSize(blobs []restic.Blob, fileSize int) {
func checkPackSize(blobs []restic.Blob, fileSize int, printer progress.Printer) {
// track current size and offset
var size, offset uint64
@@ -557,9 +552,9 @@ func checkPackSize(blobs []restic.Blob, fileSize int) {
})
for _, pb := range blobs {
Printf(" %v blob %v, offset %-6d, raw length %-6d\n", pb.Type, pb.ID, pb.Offset, pb.Length)
printer.S(" %v blob %v, offset %-6d, raw length %-6d", pb.Type, pb.ID, pb.Offset, pb.Length)
if offset != uint64(pb.Offset) {
Printf(" hole in file, want offset %v, got %v\n", offset, pb.Offset)
printer.S(" hole in file, want offset %v, got %v", offset, pb.Offset)
}
offset = uint64(pb.Offset + pb.Length)
size += uint64(pb.Length)
@@ -567,8 +562,8 @@ func checkPackSize(blobs []restic.Blob, fileSize int) {
size += uint64(pack.CalculateHeaderSize(blobs))
if uint64(fileSize) != size {
Printf(" file sizes do not match: computed %v, file size is %v\n", size, fileSize)
printer.S(" file sizes do not match: computed %v, file size is %v", size, fileSize)
} else {
Printf(" file sizes match\n")
printer.S(" file sizes match")
}
}
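// Illustrative, self-contained sketch of the consistency check performed by
// checkPackSize above: walk the blobs in offset order, report any hole between
// consecutive blobs, and compare the accumulated size (blobs plus header) with the
// file size. The blob type and the fixed header size are hypothetical
// simplifications; restic computes the header size from the blob list.
package main

import (
	"fmt"
	"sort"
)

type blob struct {
	Offset, Length uint64
}

func verifyPackSize(blobs []blob, headerSize, fileSize uint64) {
	sort.Slice(blobs, func(i, j int) bool { return blobs[i].Offset < blobs[j].Offset })

	var size, offset uint64
	for _, b := range blobs {
		if offset != b.Offset {
			fmt.Printf("  hole in file, want offset %d, got %d\n", offset, b.Offset)
		}
		offset = b.Offset + b.Length
		size += b.Length
	}
	size += headerSize

	if size != fileSize {
		fmt.Printf("  file sizes do not match: computed %d, file size is %d\n", size, fileSize)
	} else {
		fmt.Println("  file sizes match")
	}
}

func main() {
	verifyPackSize([]blob{{Offset: 0, Length: 100}, {Offset: 100, Length: 50}}, 32, 182)
	verifyPackSize([]blob{{Offset: 0, Length: 100}, {Offset: 120, Length: 50}}, 32, 202) // 20-byte hole
}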

View File

@@ -2,8 +2,11 @@
package main
import "github.com/spf13/cobra"
import (
"github.com/restic/restic/internal/global"
"github.com/spf13/cobra"
)
func registerDebugCommand(_ *cobra.Command) {
func registerDebugCommand(_ *cobra.Command, _ *global.Options) {
// No commands to register in non-debug mode
}

View File

@@ -5,17 +5,18 @@ import (
"encoding/json"
"path"
"reflect"
"sort"
"github.com/restic/restic/internal/data"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
func newDiffCommand() *cobra.Command {
func newDiffCommand(globalOptions *global.Options) *cobra.Command {
var opts DiffOptions
cmd := &cobra.Command{
@@ -52,7 +53,7 @@ Exit status is 12 if the password is incorrect.
GroupID: cmdGroupDefault,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runDiff(cmd.Context(), opts, globalOptions, args)
return runDiff(cmd.Context(), opts, *globalOptions, args, globalOptions.Term)
},
}
@@ -69,10 +70,10 @@ func (opts *DiffOptions) AddFlags(f *pflag.FlagSet) {
f.BoolVar(&opts.ShowMetadata, "metadata", false, "print changes in metadata")
}
func loadSnapshot(ctx context.Context, be restic.Lister, repo restic.LoaderUnpacked, desc string) (*restic.Snapshot, string, error) {
sn, subfolder, err := restic.FindSnapshot(ctx, be, repo, desc)
func loadSnapshot(ctx context.Context, be restic.Lister, repo restic.LoaderUnpacked, desc string) (*data.Snapshot, string, error) {
sn, subfolder, err := data.FindSnapshot(ctx, be, repo, desc)
if err != nil {
return nil, "", errors.Fatal(err.Error())
return nil, "", errors.Fatalf("%s", err)
}
return sn, subfolder, err
}
@@ -82,6 +83,7 @@ type Comparer struct {
repo restic.BlobLoader
opts DiffOptions
printChange func(change *Change)
printError func(string, ...interface{})
}
type Change struct {
@@ -105,15 +107,15 @@ type DiffStat struct {
}
// Add adds stats information for node to s.
func (s *DiffStat) Add(node *restic.Node) {
func (s *DiffStat) Add(node *data.Node) {
if node == nil {
return
}
switch node.Type {
case restic.NodeTypeFile:
case data.NodeTypeFile:
s.Files++
case restic.NodeTypeDir:
case data.NodeTypeDir:
s.Dirs++
default:
s.Others++
@@ -121,13 +123,13 @@ func (s *DiffStat) Add(node *restic.Node) {
}
// addBlobs adds the blobs of node to s.
func addBlobs(bs restic.BlobSet, node *restic.Node) {
func addBlobs(bs restic.AssociatedBlobSet, node *data.Node) {
if node == nil {
return
}
switch node.Type {
case restic.NodeTypeFile:
case data.NodeTypeFile:
for _, blob := range node.Content {
h := restic.BlobHandle{
ID: blob,
@@ -135,7 +137,7 @@ func addBlobs(bs restic.BlobSet, node *restic.Node) {
}
bs.Insert(h)
}
case restic.NodeTypeDir:
case data.NodeTypeDir:
h := restic.BlobHandle{
ID: *node.Subtree,
Type: restic.TreeBlob,
@@ -145,18 +147,18 @@ func addBlobs(bs restic.BlobSet, node *restic.Node) {
}
type DiffStatsContainer struct {
MessageType string `json:"message_type"` // "statistics"
SourceSnapshot string `json:"source_snapshot"`
TargetSnapshot string `json:"target_snapshot"`
ChangedFiles int `json:"changed_files"`
Added DiffStat `json:"added"`
Removed DiffStat `json:"removed"`
BlobsBefore, BlobsAfter, BlobsCommon restic.BlobSet `json:"-"`
MessageType string `json:"message_type"` // "statistics"
SourceSnapshot string `json:"source_snapshot"`
TargetSnapshot string `json:"target_snapshot"`
ChangedFiles int `json:"changed_files"`
Added DiffStat `json:"added"`
Removed DiffStat `json:"removed"`
BlobsBefore, BlobsAfter, BlobsCommon restic.AssociatedBlobSet `json:"-"`
}
// updateBlobs updates the blob counters in the stats struct.
func updateBlobs(repo restic.Loader, blobs restic.BlobSet, stats *DiffStat) {
for h := range blobs {
func updateBlobs(repo restic.Loader, blobs restic.AssociatedBlobSet, stats *DiffStat, printError func(string, ...interface{})) {
for h := range blobs.Keys() {
switch h.Type {
case restic.DataBlob:
stats.DataBlobs++
@@ -166,7 +168,7 @@ func updateBlobs(repo restic.Loader, blobs restic.BlobSet, stats *DiffStat) {
size, found := repo.LookupBlobSize(h.Type, h.ID)
if !found {
Warnf("unable to find blob size for %v\n", h)
printError("unable to find blob size for %v", h)
continue
}
@@ -174,30 +176,33 @@ func updateBlobs(repo restic.Loader, blobs restic.BlobSet, stats *DiffStat) {
}
}
func (c *Comparer) printDir(ctx context.Context, mode string, stats *DiffStat, blobs restic.BlobSet, prefix string, id restic.ID) error {
func (c *Comparer) printDir(ctx context.Context, mode string, stats *DiffStat, blobs restic.AssociatedBlobSet, prefix string, id restic.ID) error {
debug.Log("print %v tree %v", mode, id)
tree, err := restic.LoadTree(ctx, c.repo, id)
tree, err := data.LoadTree(ctx, c.repo, id)
if err != nil {
return err
}
for _, node := range tree.Nodes {
for item := range tree {
if item.Error != nil {
return item.Error
}
if ctx.Err() != nil {
return ctx.Err()
}
node := item.Node
name := path.Join(prefix, node.Name)
if node.Type == restic.NodeTypeDir {
if node.Type == data.NodeTypeDir {
name += "/"
}
c.printChange(NewChange(name, mode))
stats.Add(node)
addBlobs(blobs, node)
if node.Type == restic.NodeTypeDir {
if node.Type == data.NodeTypeDir {
err := c.printDir(ctx, mode, stats, blobs, name, *node.Subtree)
if err != nil && err != context.Canceled {
Warnf("error: %v\n", err)
c.printError("error: %v", err)
}
}
}
@@ -205,24 +210,28 @@ func (c *Comparer) printDir(ctx context.Context, mode string, stats *DiffStat, b
return ctx.Err()
}
func (c *Comparer) collectDir(ctx context.Context, blobs restic.BlobSet, id restic.ID) error {
func (c *Comparer) collectDir(ctx context.Context, blobs restic.AssociatedBlobSet, id restic.ID) error {
debug.Log("print tree %v", id)
tree, err := restic.LoadTree(ctx, c.repo, id)
tree, err := data.LoadTree(ctx, c.repo, id)
if err != nil {
return err
}
for _, node := range tree.Nodes {
for item := range tree {
if item.Error != nil {
return item.Error
}
if ctx.Err() != nil {
return ctx.Err()
}
node := item.Node
addBlobs(blobs, node)
if node.Type == restic.NodeTypeDir {
if node.Type == data.NodeTypeDir {
err := c.collectDir(ctx, blobs, *node.Subtree)
if err != nil && err != context.Canceled {
Warnf("error: %v\n", err)
c.printError("error: %v", err)
}
}
}
@@ -230,56 +239,41 @@ func (c *Comparer) collectDir(ctx context.Context, blobs restic.BlobSet, id rest
return ctx.Err()
}
func uniqueNodeNames(tree1, tree2 *restic.Tree) (tree1Nodes, tree2Nodes map[string]*restic.Node, uniqueNames []string) {
names := make(map[string]struct{})
tree1Nodes = make(map[string]*restic.Node)
for _, node := range tree1.Nodes {
tree1Nodes[node.Name] = node
names[node.Name] = struct{}{}
}
tree2Nodes = make(map[string]*restic.Node)
for _, node := range tree2.Nodes {
tree2Nodes[node.Name] = node
names[node.Name] = struct{}{}
}
uniqueNames = make([]string, 0, len(names))
for name := range names {
uniqueNames = append(uniqueNames, name)
}
sort.Strings(uniqueNames)
return tree1Nodes, tree2Nodes, uniqueNames
}
func (c *Comparer) diffTree(ctx context.Context, stats *DiffStatsContainer, prefix string, id1, id2 restic.ID) error {
debug.Log("diffing %v to %v", id1, id2)
tree1, err := restic.LoadTree(ctx, c.repo, id1)
tree1, err := data.LoadTree(ctx, c.repo, id1)
if err != nil {
return err
}
tree2, err := restic.LoadTree(ctx, c.repo, id2)
tree2, err := data.LoadTree(ctx, c.repo, id2)
if err != nil {
return err
}
tree1Nodes, tree2Nodes, names := uniqueNodeNames(tree1, tree2)
for _, name := range names {
for dt := range data.DualTreeIterator(tree1, tree2) {
if dt.Error != nil {
return dt.Error
}
if ctx.Err() != nil {
return ctx.Err()
}
node1, t1 := tree1Nodes[name]
node2, t2 := tree2Nodes[name]
node1 := dt.Tree1
node2 := dt.Tree2
var name string
if node1 != nil {
name = node1.Name
} else {
name = node2.Name
}
addBlobs(stats.BlobsBefore, node1)
addBlobs(stats.BlobsAfter, node2)
switch {
case t1 && t2:
case node1 != nil && node2 != nil:
name := path.Join(prefix, name)
mod := ""
@@ -287,12 +281,12 @@ func (c *Comparer) diffTree(ctx context.Context, stats *DiffStatsContainer, pref
mod += "T"
}
if node2.Type == restic.NodeTypeDir {
if node2.Type == data.NodeTypeDir {
name += "/"
}
if node1.Type == restic.NodeTypeFile &&
node2.Type == restic.NodeTypeFile &&
if node1.Type == data.NodeTypeFile &&
node2.Type == data.NodeTypeFile &&
!reflect.DeepEqual(node1.Content, node2.Content) {
mod += "M"
stats.ChangedFiles++
@@ -314,7 +308,7 @@ func (c *Comparer) diffTree(ctx context.Context, stats *DiffStatsContainer, pref
c.printChange(NewChange(name, mod))
}
if node1.Type == restic.NodeTypeDir && node2.Type == restic.NodeTypeDir {
if node1.Type == data.NodeTypeDir && node2.Type == data.NodeTypeDir {
var err error
if (*node1.Subtree).Equal(*node2.Subtree) {
err = c.collectDir(ctx, stats.BlobsCommon, *node1.Subtree)
@@ -322,35 +316,35 @@ func (c *Comparer) diffTree(ctx context.Context, stats *DiffStatsContainer, pref
err = c.diffTree(ctx, stats, name, *node1.Subtree, *node2.Subtree)
}
if err != nil && err != context.Canceled {
Warnf("error: %v\n", err)
c.printError("error: %v", err)
}
}
case t1 && !t2:
case node1 != nil && node2 == nil:
prefix := path.Join(prefix, name)
if node1.Type == restic.NodeTypeDir {
if node1.Type == data.NodeTypeDir {
prefix += "/"
}
c.printChange(NewChange(prefix, "-"))
stats.Removed.Add(node1)
if node1.Type == restic.NodeTypeDir {
if node1.Type == data.NodeTypeDir {
err := c.printDir(ctx, "-", &stats.Removed, stats.BlobsBefore, prefix, *node1.Subtree)
if err != nil && err != context.Canceled {
Warnf("error: %v\n", err)
c.printError("error: %v", err)
}
}
case !t1 && t2:
case node1 == nil && node2 != nil:
prefix := path.Join(prefix, name)
if node2.Type == restic.NodeTypeDir {
if node2.Type == data.NodeTypeDir {
prefix += "/"
}
c.printChange(NewChange(prefix, "+"))
stats.Added.Add(node2)
if node2.Type == restic.NodeTypeDir {
if node2.Type == data.NodeTypeDir {
err := c.printDir(ctx, "+", &stats.Added, stats.BlobsAfter, prefix, *node2.Subtree)
if err != nil && err != context.Canceled {
Warnf("error: %v\n", err)
c.printError("error: %v", err)
}
}
}
@@ -359,12 +353,14 @@ func (c *Comparer) diffTree(ctx context.Context, stats *DiffStatsContainer, pref
return ctx.Err()
}
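// Illustrative, self-contained sketch of what a dual-tree iterator provides to
// diffTree above: two trees whose entries are sorted by name are walked in
// lockstep, yielding one pair per unique name with either side possibly nil,
// instead of first materialising a merged name list as the removed
// uniqueNodeNames helper did. node, pair and dualIterate are hypothetical names.
package main

import "fmt"

type node struct{ Name string }

type pair struct{ A, B *node }

func dualIterate(a, b []node) []pair {
	var out []pair
	i, j := 0, 0
	for i < len(a) || j < len(b) {
		switch {
		case j >= len(b) || (i < len(a) && a[i].Name < b[j].Name):
			out = append(out, pair{A: &a[i]}) // only present in the first tree
			i++
		case i >= len(a) || b[j].Name < a[i].Name:
			out = append(out, pair{B: &b[j]}) // only present in the second tree
			j++
		default: // same name on both sides
			out = append(out, pair{A: &a[i], B: &b[j]})
			i++
			j++
		}
	}
	return out
}

func main() {
	name := func(n *node) string {
		if n == nil {
			return "-"
		}
		return n.Name
	}
	tree1 := []node{{"a"}, {"b"}, {"d"}}
	tree2 := []node{{"b"}, {"c"}, {"d"}}
	for _, p := range dualIterate(tree1, tree2) {
		fmt.Println(name(p.A), name(p.B))
	}
}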
func runDiff(ctx context.Context, opts DiffOptions, gopts GlobalOptions, args []string) error {
func runDiff(ctx context.Context, opts DiffOptions, gopts global.Options, args []string, term ui.Terminal) error {
if len(args) != 2 {
return errors.Fatalf("specify two snapshot IDs")
}
ctx, repo, unlock, err := openWithReadLock(ctx, gopts, gopts.NoLock)
printer := ui.NewProgressPrinter(gopts.JSON, gopts.Verbosity, term)
ctx, repo, unlock, err := openWithReadLock(ctx, gopts, gopts.NoLock, printer)
if err != nil {
return err
}
@@ -386,10 +382,9 @@ func runDiff(ctx context.Context, opts DiffOptions, gopts GlobalOptions, args []
}
if !gopts.JSON {
Verbosef("comparing snapshot %v to %v:\n\n", sn1.ID().Str(), sn2.ID().Str())
printer.P("comparing snapshot %v to %v:\n\n", sn1.ID().Str(), sn2.ID().Str())
}
bar := newIndexProgress(gopts.Quiet, gopts.JSON)
if err = repo.LoadIndex(ctx, bar); err != nil {
if err = repo.LoadIndex(ctx, printer); err != nil {
return err
}
@@ -401,30 +396,31 @@ func runDiff(ctx context.Context, opts DiffOptions, gopts GlobalOptions, args []
return errors.Errorf("snapshot %v has nil tree", sn2.ID().Str())
}
sn1.Tree, err = restic.FindTreeDirectory(ctx, repo, sn1.Tree, subfolder1)
sn1.Tree, err = data.FindTreeDirectory(ctx, repo, sn1.Tree, subfolder1)
if err != nil {
return err
}
sn2.Tree, err = restic.FindTreeDirectory(ctx, repo, sn2.Tree, subfolder2)
sn2.Tree, err = data.FindTreeDirectory(ctx, repo, sn2.Tree, subfolder2)
if err != nil {
return err
}
c := &Comparer{
repo: repo,
opts: opts,
repo: repo,
opts: opts,
printError: printer.E,
printChange: func(change *Change) {
Printf("%-5s%v\n", change.Modifier, change.Path)
printer.S("%-5s%v", change.Modifier, change.Path)
},
}
if gopts.JSON {
enc := json.NewEncoder(globalOptions.stdout)
enc := json.NewEncoder(gopts.Term.OutputWriter())
c.printChange = func(change *Change) {
err := enc.Encode(change)
if err != nil {
Warnf("JSON encode failed: %v\n", err)
printer.E("JSON encode failed: %v", err)
}
}
}
@@ -437,9 +433,9 @@ func runDiff(ctx context.Context, opts DiffOptions, gopts GlobalOptions, args []
MessageType: "statistics",
SourceSnapshot: args[0],
TargetSnapshot: args[1],
BlobsBefore: restic.NewBlobSet(),
BlobsAfter: restic.NewBlobSet(),
BlobsCommon: restic.NewBlobSet(),
BlobsBefore: repo.NewAssociatedBlobSet(),
BlobsAfter: repo.NewAssociatedBlobSet(),
BlobsCommon: repo.NewAssociatedBlobSet(),
}
stats.BlobsBefore.Insert(restic.BlobHandle{Type: restic.TreeBlob, ID: *sn1.Tree})
stats.BlobsAfter.Insert(restic.BlobHandle{Type: restic.TreeBlob, ID: *sn2.Tree})
@@ -450,23 +446,23 @@ func runDiff(ctx context.Context, opts DiffOptions, gopts GlobalOptions, args []
}
both := stats.BlobsBefore.Intersect(stats.BlobsAfter)
updateBlobs(repo, stats.BlobsBefore.Sub(both).Sub(stats.BlobsCommon), &stats.Removed)
updateBlobs(repo, stats.BlobsAfter.Sub(both).Sub(stats.BlobsCommon), &stats.Added)
updateBlobs(repo, stats.BlobsBefore.Sub(both).Sub(stats.BlobsCommon), &stats.Removed, printer.E)
updateBlobs(repo, stats.BlobsAfter.Sub(both).Sub(stats.BlobsCommon), &stats.Added, printer.E)
if gopts.JSON {
err := json.NewEncoder(globalOptions.stdout).Encode(stats)
err := json.NewEncoder(gopts.Term.OutputWriter()).Encode(stats)
if err != nil {
Warnf("JSON encode failed: %v\n", err)
printer.E("JSON encode failed: %v", err)
}
} else {
Printf("\n")
Printf("Files: %5d new, %5d removed, %5d changed\n", stats.Added.Files, stats.Removed.Files, stats.ChangedFiles)
Printf("Dirs: %5d new, %5d removed\n", stats.Added.Dirs, stats.Removed.Dirs)
Printf("Others: %5d new, %5d removed\n", stats.Added.Others, stats.Removed.Others)
Printf("Data Blobs: %5d new, %5d removed\n", stats.Added.DataBlobs, stats.Removed.DataBlobs)
Printf("Tree Blobs: %5d new, %5d removed\n", stats.Added.TreeBlobs, stats.Removed.TreeBlobs)
Printf(" Added: %-5s\n", ui.FormatBytes(stats.Added.Bytes))
Printf(" Removed: %-5s\n", ui.FormatBytes(stats.Removed.Bytes))
printer.S("")
printer.S("Files: %5d new, %5d removed, %5d changed", stats.Added.Files, stats.Removed.Files, stats.ChangedFiles)
printer.S("Dirs: %5d new, %5d removed", stats.Added.Dirs, stats.Removed.Dirs)
printer.S("Others: %5d new, %5d removed", stats.Added.Others, stats.Removed.Others)
printer.S("Data Blobs: %5d new, %5d removed", stats.Added.DataBlobs, stats.Removed.DataBlobs)
printer.S("Tree Blobs: %5d new, %5d removed", stats.Added.TreeBlobs, stats.Removed.TreeBlobs)
printer.S(" Added: %-5s", ui.FormatBytes(stats.Added.Bytes))
printer.S(" Removed: %-5s", ui.FormatBytes(stats.Removed.Bytes))
}
return nil

View File

@@ -11,15 +11,16 @@ import (
"strings"
"testing"
"github.com/restic/restic/internal/global"
rtest "github.com/restic/restic/internal/test"
)
func testRunDiffOutput(gopts GlobalOptions, firstSnapshotID string, secondSnapshotID string) (string, error) {
buf, err := withCaptureStdout(func() error {
func testRunDiffOutput(t testing.TB, gopts global.Options, firstSnapshotID string, secondSnapshotID string) (string, error) {
buf, err := withCaptureStdout(t, gopts, func(ctx context.Context, gopts global.Options) error {
opts := DiffOptions{
ShowMetadata: false,
}
return runDiff(context.TODO(), opts, gopts, []string{firstSnapshotID, secondSnapshotID})
return runDiff(ctx, opts, gopts, []string{firstSnapshotID, secondSnapshotID}, gopts.Term)
})
return buf.String(), err
}
@@ -123,10 +124,10 @@ func TestDiff(t *testing.T) {
// quiet suppresses the diff output except for the summary
env.gopts.Quiet = false
_, err := testRunDiffOutput(env.gopts, "", secondSnapshotID)
_, err := testRunDiffOutput(t, env.gopts, "", secondSnapshotID)
rtest.Assert(t, err != nil, "expected error on invalid snapshot id")
out, err := testRunDiffOutput(env.gopts, firstSnapshotID, secondSnapshotID)
out, err := testRunDiffOutput(t, env.gopts, firstSnapshotID, secondSnapshotID)
rtest.OK(t, err)
for _, pattern := range diffOutputRegexPatterns {
@@ -137,7 +138,7 @@ func TestDiff(t *testing.T) {
// check quiet output
env.gopts.Quiet = true
outQuiet, err := testRunDiffOutput(env.gopts, firstSnapshotID, secondSnapshotID)
outQuiet, err := testRunDiffOutput(t, env.gopts, firstSnapshotID, secondSnapshotID)
rtest.OK(t, err)
rtest.Assert(t, len(outQuiet) < len(out), "expected shorter output on quiet mode %v vs. %v", len(outQuiet), len(out))
@@ -154,7 +155,7 @@ func TestDiffJSON(t *testing.T) {
// quiet suppresses the diff output except for the summary
env.gopts.Quiet = false
env.gopts.JSON = true
out, err := testRunDiffOutput(env.gopts, firstSnapshotID, secondSnapshotID)
out, err := testRunDiffOutput(t, env.gopts, firstSnapshotID, secondSnapshotID)
rtest.OK(t, err)
var stat DiffStatsContainer
@@ -181,7 +182,7 @@ func TestDiffJSON(t *testing.T) {
// check quiet output
env.gopts.Quiet = true
outQuiet, err := testRunDiffOutput(env.gopts, firstSnapshotID, secondSnapshotID)
outQuiet, err := testRunDiffOutput(t, env.gopts, firstSnapshotID, secondSnapshotID)
rtest.OK(t, err)
stat = DiffStatsContainer{}

View File

@@ -7,16 +7,19 @@ import (
"path"
"path/filepath"
"github.com/restic/restic/internal/data"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/dump"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
func newDumpCommand() *cobra.Command {
func newDumpCommand(globalOptions *global.Options) *cobra.Command {
var opts DumpOptions
cmd := &cobra.Command{
Use: "dump [flags] snapshotID file",
@@ -46,7 +49,8 @@ Exit status is 12 if the password is incorrect.
GroupID: cmdGroupDefault,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runDump(cmd.Context(), opts, globalOptions, args)
finalizeSnapshotFilter(&opts.SnapshotFilter)
return runDump(cmd.Context(), opts, *globalOptions, args, globalOptions.Term)
},
}
@@ -56,7 +60,7 @@ Exit status is 12 if the password is incorrect.
// DumpOptions collects all options for the dump command.
type DumpOptions struct {
restic.SnapshotFilter
data.SnapshotFilter
Archive string
Target string
}
@@ -76,7 +80,7 @@ func splitPath(p string) []string {
return append(s, f)
}
func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.BlobLoader, prefix string, pathComponents []string, d *dump.Dumper, canWriteArchiveFunc func() error) error {
func printFromTree(ctx context.Context, tree data.TreeNodeIterator, repo restic.BlobLoader, prefix string, pathComponents []string, d *dump.Dumper, canWriteArchiveFunc func() error) error {
// If we print / we need to assume that there are multiple nodes at that
// level in the tree.
if pathComponents[0] == "" {
@@ -88,35 +92,38 @@ func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.BlobLoade
item := filepath.Join(prefix, pathComponents[0])
l := len(pathComponents)
for _, node := range tree.Nodes {
for it := range tree {
if it.Error != nil {
return it.Error
}
if ctx.Err() != nil {
return ctx.Err()
}
node := it.Node
// If dumping something in the highest level it will just take the
// first item it finds and dump that according to the switch case below.
if node.Name == pathComponents[0] {
switch {
case l == 1 && node.Type == restic.NodeTypeFile:
case l == 1 && node.Type == data.NodeTypeFile:
return d.WriteNode(ctx, node)
case l > 1 && node.Type == restic.NodeTypeDir:
subtree, err := restic.LoadTree(ctx, repo, *node.Subtree)
case l > 1 && node.Type == data.NodeTypeDir:
subtree, err := data.LoadTree(ctx, repo, *node.Subtree)
if err != nil {
return errors.Wrapf(err, "cannot load subtree for %q", item)
}
return printFromTree(ctx, subtree, repo, item, pathComponents[1:], d, canWriteArchiveFunc)
case node.Type == restic.NodeTypeDir:
case node.Type == data.NodeTypeDir:
if err := canWriteArchiveFunc(); err != nil {
return err
}
subtree, err := restic.LoadTree(ctx, repo, *node.Subtree)
subtree, err := data.LoadTree(ctx, repo, *node.Subtree)
if err != nil {
return err
}
return d.DumpTree(ctx, subtree, item)
case l > 1:
return fmt.Errorf("%q should be a dir, but is a %q", item, node.Type)
case node.Type != restic.NodeTypeFile:
case node.Type != data.NodeTypeFile:
return fmt.Errorf("%q should be a file, but is a %q", item, node.Type)
}
}
@@ -124,11 +131,13 @@ func printFromTree(ctx context.Context, tree *restic.Tree, repo restic.BlobLoade
return fmt.Errorf("path %q not found in snapshot", item)
}
func runDump(ctx context.Context, opts DumpOptions, gopts GlobalOptions, args []string) error {
func runDump(ctx context.Context, opts DumpOptions, gopts global.Options, args []string, term ui.Terminal) error {
if len(args) != 2 {
return errors.Fatal("no file and no snapshot ID specified")
}
printer := ui.NewProgressPrinter(gopts.JSON, gopts.Verbosity, term)
switch opts.Archive {
case "tar", "zip":
default:
@@ -142,39 +151,34 @@ func runDump(ctx context.Context, opts DumpOptions, gopts GlobalOptions, args []
splittedPath := splitPath(path.Clean(pathToPrint))
ctx, repo, unlock, err := openWithReadLock(ctx, gopts, gopts.NoLock)
ctx, repo, unlock, err := openWithReadLock(ctx, gopts, gopts.NoLock, printer)
if err != nil {
return err
}
defer unlock()
sn, subfolder, err := (&restic.SnapshotFilter{
Hosts: opts.Hosts,
Paths: opts.Paths,
Tags: opts.Tags,
}).FindLatest(ctx, repo, repo, snapshotIDString)
sn, subfolder, err := opts.SnapshotFilter.FindLatest(ctx, repo, repo, snapshotIDString)
if err != nil {
return errors.Fatalf("failed to find snapshot: %v", err)
}
bar := newIndexProgress(gopts.Quiet, gopts.JSON)
err = repo.LoadIndex(ctx, bar)
err = repo.LoadIndex(ctx, printer)
if err != nil {
return err
}
sn.Tree, err = restic.FindTreeDirectory(ctx, repo, sn.Tree, subfolder)
sn.Tree, err = data.FindTreeDirectory(ctx, repo, sn.Tree, subfolder)
if err != nil {
return err
}
tree, err := restic.LoadTree(ctx, repo, *sn.Tree)
tree, err := data.LoadTree(ctx, repo, *sn.Tree)
if err != nil {
return errors.Fatalf("loading tree for snapshot %q failed: %v", snapshotIDString, err)
}
outputFileWriter := os.Stdout
canWriteArchiveFunc := checkStdoutArchive
outputFileWriter := term.OutputRaw()
canWriteArchiveFunc := checkStdoutArchive(term)
if opts.Target != "" {
file, err := os.Create(opts.Target)
@@ -198,9 +202,9 @@ func runDump(ctx context.Context, opts DumpOptions, gopts GlobalOptions, args []
return nil
}
func checkStdoutArchive() error {
if stdoutIsTerminal() {
return fmt.Errorf("stdout is the terminal, please redirect output")
func checkStdoutArchive(term ui.Terminal) func() error {
if term.OutputIsTerminal() {
return func() error { return fmt.Errorf("stdout is the terminal, please redirect output") }
}
return nil
return func() error { return nil }
}
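checkStdoutArchive now returns a closure, so the terminal check is captured once and the error only surfaces when an archive write is actually attempted. A standalone sketch of the same pattern (names are illustrative):

package main

import (
	"errors"
	"fmt"
)

// guardArchiveOutput captures the terminal check up front and defers the
// failure to the point where a caller actually tries to write an archive.
func guardArchiveOutput(stdoutIsTerminal bool) func() error {
	if stdoutIsTerminal {
		return func() error {
			return errors.New("stdout is the terminal, please redirect output")
		}
	}
	return func() error { return nil }
}

func main() {
	canWrite := guardArchiveOutput(true)
	if err := canWrite(); err != nil {
		fmt.Println("refusing to write archive:", err)
		return
	}
	fmt.Println("writing archive to stdout")
}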

View File

@@ -1,16 +1,15 @@
package main
import (
"fmt"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/feature"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/ui/table"
"github.com/spf13/cobra"
)
func newFeaturesCommand() *cobra.Command {
func newFeaturesCommand(globalOptions *global.Options) *cobra.Command {
cmd := &cobra.Command{
Use: "features",
Short: "Print list of feature flags",
@@ -39,7 +38,7 @@ Exit status is 1 if there was any error.
return errors.Fatal("the feature command expects no arguments")
}
fmt.Printf("All Feature Flags:\n")
globalOptions.Term.Print("All Feature Flags:\n")
flags := feature.Flag.List()
tab := table.New()
@@ -51,7 +50,7 @@ Exit status is 1 if there was any error.
for _, flag := range flags {
tab.AddRow(flag)
}
return tab.Write(globalOptions.stdout)
return tab.Write(globalOptions.Term.OutputWriter())
},
}

View File

@@ -3,6 +3,8 @@ package main
import (
"context"
"encoding/json"
"fmt"
"io"
"sort"
"strings"
"time"
@@ -10,14 +12,17 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"github.com/restic/restic/internal/data"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/filter"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/walker"
)
func newFindCommand() *cobra.Command {
func newFindCommand(globalOptions *global.Options) *cobra.Command {
var opts FindOptions
cmd := &cobra.Command{
@@ -48,7 +53,8 @@ Exit status is 12 if the password is incorrect.
GroupID: cmdGroupDefault,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runFind(cmd.Context(), opts, globalOptions, args)
finalizeSnapshotFilter(&opts.SnapshotFilter)
return runFind(cmd.Context(), opts, *globalOptions, args, globalOptions.Term)
},
}
@@ -67,7 +73,7 @@ type FindOptions struct {
ListLong bool
HumanReadable bool
Reverse bool
restic.SnapshotFilter
data.SnapshotFilter
}
func (opts *FindOptions) AddFlags(f *pflag.FlagSet) {
@@ -121,13 +127,19 @@ type statefulOutput struct {
HumanReadable bool
JSON bool
inuse bool
newsn *restic.Snapshot
oldsn *restic.Snapshot
newsn *data.Snapshot
oldsn *data.Snapshot
hits int
printer interface {
S(string, ...interface{})
P(string, ...interface{})
E(string, ...interface{})
}
stdout io.Writer
}
func (s *statefulOutput) PrintPatternJSON(path string, node *restic.Node) {
type findNode restic.Node
func (s *statefulOutput) PrintPatternJSON(path string, node *data.Node) {
type findNode data.Node
b, err := json.Marshal(struct {
// Add these attributes
Path string `json:"path,omitempty"`
@@ -148,40 +160,40 @@ func (s *statefulOutput) PrintPatternJSON(path string, node *restic.Node) {
findNode: (*findNode)(node),
})
if err != nil {
Warnf("Marshall failed: %v\n", err)
s.printer.E("Marshall failed: %v", err)
return
}
if !s.inuse {
Printf("[")
_, _ = s.stdout.Write([]byte("["))
s.inuse = true
}
if s.newsn != s.oldsn {
if s.oldsn != nil {
Printf("],\"hits\":%d,\"snapshot\":%q},", s.hits, s.oldsn.ID())
_, _ = fmt.Fprintf(s.stdout, "],\"hits\":%d,\"snapshot\":%q},", s.hits, s.oldsn.ID())
}
Printf(`{"matches":[`)
_, _ = s.stdout.Write([]byte(`{"matches":[`))
s.oldsn = s.newsn
s.hits = 0
}
if s.hits > 0 {
Printf(",")
_, _ = s.stdout.Write([]byte(","))
}
Print(string(b))
_, _ = s.stdout.Write(b)
s.hits++
}
func (s *statefulOutput) PrintPatternNormal(path string, node *restic.Node) {
func (s *statefulOutput) PrintPatternNormal(path string, node *data.Node) {
if s.newsn != s.oldsn {
if s.oldsn != nil {
Verbosef("\n")
s.printer.P("")
}
s.oldsn = s.newsn
Verbosef("Found matching entries in snapshot %s from %s\n", s.oldsn.ID().Str(), s.oldsn.Time.Local().Format(TimeFormat))
s.printer.P("Found matching entries in snapshot %s from %s", s.oldsn.ID().Str(), s.oldsn.Time.Local().Format(global.TimeFormat))
}
Println(formatNode(path, node, s.ListLong, s.HumanReadable))
s.printer.S(formatNode(path, node, s.ListLong, s.HumanReadable))
}
func (s *statefulOutput) PrintPattern(path string, node *restic.Node) {
func (s *statefulOutput) PrintPattern(path string, node *data.Node) {
if s.JSON {
s.PrintPatternJSON(path, node)
} else {
@@ -189,7 +201,7 @@ func (s *statefulOutput) PrintPattern(path string, node *restic.Node) {
}
}
func (s *statefulOutput) PrintObjectJSON(kind, id, nodepath, treeID string, sn *restic.Snapshot) {
func (s *statefulOutput) PrintObjectJSON(kind, id, nodepath, treeID string, sn *data.Snapshot) {
b, err := json.Marshal(struct {
// Add these attributes
ObjectType string `json:"object_type"`
@@ -207,32 +219,32 @@ func (s *statefulOutput) PrintObjectJSON(kind, id, nodepath, treeID string, sn *
Time: sn.Time,
})
if err != nil {
Warnf("Marshall failed: %v\n", err)
s.printer.E("Marshall failed: %v", err)
return
}
if !s.inuse {
Printf("[")
_, _ = s.stdout.Write([]byte("["))
s.inuse = true
}
if s.hits > 0 {
Printf(",")
_, _ = s.stdout.Write([]byte(","))
}
Print(string(b))
_, _ = s.stdout.Write(b)
s.hits++
}
func (s *statefulOutput) PrintObjectNormal(kind, id, nodepath, treeID string, sn *restic.Snapshot) {
Printf("Found %s %s\n", kind, id)
func (s *statefulOutput) PrintObjectNormal(kind, id, nodepath, treeID string, sn *data.Snapshot) {
s.printer.S("Found %s %s", kind, id)
if kind == "blob" {
Printf(" ... in file %s\n", nodepath)
Printf(" (tree %s)\n", treeID)
s.printer.S(" ... in file %s", nodepath)
s.printer.S(" (tree %s)", treeID)
} else {
Printf(" ... path %s\n", nodepath)
s.printer.S(" ... path %s", nodepath)
}
Printf(" ... in snapshot %s (%s)\n", sn.ID().Str(), sn.Time.Local().Format(TimeFormat))
s.printer.S(" ... in snapshot %s (%s)", sn.ID().Str(), sn.Time.Local().Format(global.TimeFormat))
}
func (s *statefulOutput) PrintObject(kind, id, nodepath, treeID string, sn *restic.Snapshot) {
func (s *statefulOutput) PrintObject(kind, id, nodepath, treeID string, sn *data.Snapshot) {
if s.JSON {
s.PrintObjectJSON(kind, id, nodepath, treeID, sn)
} else {
@@ -244,12 +256,12 @@ func (s *statefulOutput) Finish() {
if s.JSON {
// do some finishing up
if s.oldsn != nil {
Printf("],\"hits\":%d,\"snapshot\":%q}", s.hits, s.oldsn.ID())
_, _ = fmt.Fprintf(s.stdout, "],\"hits\":%d,\"snapshot\":%q}", s.hits, s.oldsn.ID())
}
if s.inuse {
Printf("]\n")
_, _ = s.stdout.Write([]byte("]\n"))
} else {
Printf("[]\n")
_, _ = s.stdout.Write([]byte("[]\n"))
}
return
}
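The JSON branch above streams one array by hand on the raw writer: an opening bracket on first use, commas between hits, and the closing bracket in Finish. A compact standalone sketch of that streaming pattern (illustrative type, not restic code):

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

// arrayStream writes a JSON array element by element, mirroring the
// inuse/hits bookkeeping used above.
type arrayStream struct {
	out   io.Writer
	inuse bool
	hits  int
}

func (a *arrayStream) add(v interface{}) error {
	b, err := json.Marshal(v)
	if err != nil {
		return err
	}
	if !a.inuse {
		_, _ = a.out.Write([]byte("["))
		a.inuse = true
	}
	if a.hits > 0 {
		_, _ = a.out.Write([]byte(","))
	}
	_, _ = a.out.Write(b)
	a.hits++
	return nil
}

func (a *arrayStream) finish() {
	if a.inuse {
		fmt.Fprintln(a.out, "]")
		return
	}
	fmt.Fprintln(a.out, "[]")
}

func main() {
	s := &arrayStream{out: os.Stdout}
	_ = s.add(map[string]string{"path": "/a"})
	_ = s.add(map[string]string{"path": "/b"})
	s.finish() // prints [{"path":"/a"},{"path":"/b"}]
}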
@@ -263,9 +275,14 @@ type Finder struct {
blobIDs map[string]struct{}
treeIDs map[string]struct{}
itemsFound int
printer interface {
S(string, ...interface{})
P(string, ...interface{})
E(string, ...interface{})
}
}
func (f *Finder) findInSnapshot(ctx context.Context, sn *restic.Snapshot) error {
func (f *Finder) findInSnapshot(ctx context.Context, sn *data.Snapshot) error {
debug.Log("searching in snapshot %s\n for entries within [%s %s]", sn.ID(), f.pat.oldest, f.pat.newest)
if sn.Tree == nil {
@@ -273,11 +290,12 @@ func (f *Finder) findInSnapshot(ctx context.Context, sn *restic.Snapshot) error
}
f.out.newsn = sn
return walker.Walk(ctx, f.repo, *sn.Tree, walker.WalkVisitor{ProcessNode: func(parentTreeID restic.ID, nodepath string, node *restic.Node, err error) error {
return walker.Walk(ctx, f.repo, *sn.Tree, walker.WalkVisitor{ProcessNode: func(parentTreeID restic.ID, nodepath string, node *data.Node, err error) error {
if err != nil {
debug.Log("Error loading tree %v: %v", parentTreeID, err)
Printf("Unable to load tree %s\n ... which belongs to snapshot %s\n", parentTreeID, sn.ID())
f.printer.S("Unable to load tree %s", parentTreeID)
f.printer.S(" ... which belongs to snapshot %s", sn.ID())
return walker.ErrSkipNode
}
@@ -305,7 +323,7 @@ func (f *Finder) findInSnapshot(ctx context.Context, sn *restic.Snapshot) error
}
var errIfNoMatch error
if node.Type == restic.NodeTypeDir {
if node.Type == data.NodeTypeDir {
var childMayMatch bool
for _, pat := range f.pat.pattern {
mayMatch, err := filter.ChildMatch(pat, normalizedNodepath)
@@ -363,7 +381,7 @@ func (f *Finder) findTree(treeID restic.ID, nodepath string) error {
return nil
}
func (f *Finder) findIDs(ctx context.Context, sn *restic.Snapshot) error {
func (f *Finder) findIDs(ctx context.Context, sn *data.Snapshot) error {
debug.Log("searching IDs in snapshot %s", sn.ID())
if sn.Tree == nil {
@@ -371,11 +389,12 @@ func (f *Finder) findIDs(ctx context.Context, sn *restic.Snapshot) error {
}
f.out.newsn = sn
return walker.Walk(ctx, f.repo, *sn.Tree, walker.WalkVisitor{ProcessNode: func(parentTreeID restic.ID, nodepath string, node *restic.Node, err error) error {
return walker.Walk(ctx, f.repo, *sn.Tree, walker.WalkVisitor{ProcessNode: func(parentTreeID restic.ID, nodepath string, node *data.Node, err error) error {
if err != nil {
debug.Log("Error loading tree %v: %v", parentTreeID, err)
Printf("Unable to load tree %s\n ... which belongs to snapshot %s\n", parentTreeID, sn.ID())
f.printer.S("Unable to load tree %s", parentTreeID)
f.printer.S(" ... which belongs to snapshot %s", sn.ID())
return walker.ErrSkipNode
}
@@ -395,7 +414,7 @@ func (f *Finder) findIDs(ctx context.Context, sn *restic.Snapshot) error {
}
}
if node.Type == restic.NodeTypeFile && f.blobIDs != nil {
if node.Type == data.NodeTypeFile && f.blobIDs != nil {
for _, id := range node.Content {
if ctx.Err() != nil {
return ctx.Err()
@@ -524,7 +543,7 @@ func (f *Finder) indexPacksToBlobs(ctx context.Context, packIDs map[string]struc
for h := range indexPackIDs {
list = append(list, h)
}
Warnf("some pack files are missing from the repository, getting their blobs from the repository index: %v\n\n", list)
f.printer.E("some pack files are missing from the repository, getting their blobs from the repository index: %v\n\n", list)
}
return packIDs, nil
}
@@ -532,19 +551,20 @@ func (f *Finder) indexPacksToBlobs(ctx context.Context, packIDs map[string]struc
func (f *Finder) findObjectPack(id string, t restic.BlobType) {
rid, err := restic.ParseID(id)
if err != nil {
Printf("Note: cannot find pack for object '%s', unable to parse ID: %v\n", id, err)
f.printer.S("Note: cannot find pack for object '%s', unable to parse ID: %v", id, err)
return
}
blobs := f.repo.LookupBlob(t, rid)
if len(blobs) == 0 {
Printf("Object %s not found in the index\n", rid.Str())
f.printer.S("Object %s not found in the index", rid.Str())
return
}
for _, b := range blobs {
if b.ID.Equal(rid) {
Printf("Object belongs to pack %s\n ... Pack %s: %s\n", b.PackID, b.PackID.Str(), b.String())
f.printer.S("Object belongs to pack %s", b.PackID)
f.printer.S(" ... Pack %s: %s", b.PackID.Str(), b.String())
break
}
}
@@ -560,11 +580,13 @@ func (f *Finder) findObjectsPacks() {
}
}
func runFind(ctx context.Context, opts FindOptions, gopts GlobalOptions, args []string) error {
func runFind(ctx context.Context, opts FindOptions, gopts global.Options, args []string, term ui.Terminal) error {
if len(args) == 0 {
return errors.Fatal("wrong number of arguments")
}
printer := ui.NewProgressPrinter(gopts.JSON, gopts.Verbosity, term)
var err error
pat := findPattern{pattern: args}
if opts.CaseInsensitive {
@@ -586,6 +608,10 @@ func runFind(ctx context.Context, opts FindOptions, gopts GlobalOptions, args []
}
}
if !pat.newest.IsZero() && !pat.oldest.IsZero() && pat.oldest.After(pat.newest) {
return errors.Fatal("--oldest must specify a time before --newest")
}
// Check at most only one kind of IDs is provided: currently we
// can't mix types
if (opts.BlobID && opts.TreeID) ||
@@ -594,7 +620,7 @@ func runFind(ctx context.Context, opts FindOptions, gopts GlobalOptions, args []
return errors.Fatal("cannot have several ID types")
}
ctx, repo, unlock, err := openWithReadLock(ctx, gopts, gopts.NoLock)
ctx, repo, unlock, err := openWithReadLock(ctx, gopts, gopts.NoLock, printer)
if err != nil {
return err
}
@@ -604,15 +630,15 @@ func runFind(ctx context.Context, opts FindOptions, gopts GlobalOptions, args []
if err != nil {
return err
}
bar := newIndexProgress(gopts.Quiet, gopts.JSON)
if err = repo.LoadIndex(ctx, bar); err != nil {
if err = repo.LoadIndex(ctx, printer); err != nil {
return err
}
f := &Finder{
repo: repo,
pat: pat,
out: statefulOutput{ListLong: opts.ListLong, HumanReadable: opts.HumanReadable, JSON: gopts.JSON},
repo: repo,
pat: pat,
out: statefulOutput{ListLong: opts.ListLong, HumanReadable: opts.HumanReadable, JSON: gopts.JSON, printer: printer, stdout: term.OutputRaw()},
printer: printer,
}
if opts.BlobID {
@@ -635,8 +661,8 @@ func runFind(ctx context.Context, opts FindOptions, gopts GlobalOptions, args []
}
}
var filteredSnapshots []*restic.Snapshot
for sn := range FindFilteredSnapshots(ctx, snapshotLister, repo, &opts.SnapshotFilter, opts.Snapshots) {
var filteredSnapshots []*data.Snapshot
for sn := range FindFilteredSnapshots(ctx, snapshotLister, repo, &opts.SnapshotFilter, opts.Snapshots, printer) {
filteredSnapshots = append(filteredSnapshots, sn)
}
if ctx.Err() != nil {

View File

@@ -7,14 +7,15 @@ import (
"testing"
"time"
"github.com/restic/restic/internal/global"
rtest "github.com/restic/restic/internal/test"
)
func testRunFind(t testing.TB, wantJSON bool, opts FindOptions, gopts GlobalOptions, pattern string) []byte {
buf, err := withCaptureStdout(func() error {
func testRunFind(t testing.TB, wantJSON bool, opts FindOptions, gopts global.Options, pattern string) []byte {
buf, err := withCaptureStdout(t, gopts, func(ctx context.Context, gopts global.Options) error {
gopts.JSON = wantJSON
return runFind(context.TODO(), opts, gopts, []string{pattern})
return runFind(ctx, opts, gopts, []string{pattern}, gopts.Term)
})
rtest.OK(t, err)
return buf.Bytes()
@@ -95,7 +96,7 @@ func TestFindSorting(t *testing.T) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
datafile := testSetupBackupData(t, env)
testSetupBackupData(t, env)
opts := BackupOptions{}
// first backup
@@ -114,14 +115,14 @@ func TestFindSorting(t *testing.T) {
// first restic find - with default FindOptions{}
results := testRunFind(t, true, FindOptions{}, env.gopts, "testfile")
lines := strings.Split(string(results), "\n")
rtest.Assert(t, len(lines) == 2, "expected two files found in repo (%v), found %d", datafile, len(lines))
rtest.Assert(t, len(lines) == 2, "expected two lines of output, found %d", len(lines))
matches := []testMatches{}
rtest.OK(t, json.Unmarshal(results, &matches))
// run second restic find with --reverse, sort oldest to newest
resultsReverse := testRunFind(t, true, FindOptions{Reverse: true}, env.gopts, "testfile")
lines = strings.Split(string(resultsReverse), "\n")
rtest.Assert(t, len(lines) == 2, "expected two files found in repo (%v), found %d", datafile, len(lines))
rtest.Assert(t, len(lines) == 2, "expected two lines of output, found %d", len(lines))
matchesReverse := []testMatches{}
rtest.OK(t, json.Unmarshal(resultsReverse, &matchesReverse))
@@ -131,3 +132,12 @@ func TestFindSorting(t *testing.T) {
rtest.Assert(t, matches[0].SnapshotID == matchesReverse[1].SnapshotID, "matches should be sorted 1")
rtest.Assert(t, matches[1].SnapshotID == matchesReverse[0].SnapshotID, "matches should be sorted 2")
}
func TestFindInvalidTimeRange(t *testing.T) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
err := runFind(context.TODO(), FindOptions{Oldest: "2026-01-01", Newest: "2020-01-01"}, env.gopts, []string{"quack"}, env.gopts.Term)
rtest.Assert(t, err != nil && err.Error() == "Fatal: --oldest must specify a time before --newest",
"unexpected error message: %v", err)
}
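The new test exercises the --oldest/--newest ordering check added above. Assuming plain dates like those used in the test, the validation amounts to parsing both bounds and rejecting an oldest value that falls after newest, roughly as in this sketch (the date layout and helper are illustrative, not restic's parsing code):

package main

import (
	"errors"
	"fmt"
	"time"
)

// checkTimeRange is a hedged sketch of the ordering check: both bounds are
// optional, and a non-zero oldest bound must not lie after a non-zero newest
// bound.
func checkTimeRange(oldest, newest string) error {
	const layout = "2006-01-02"
	parse := func(s string) (time.Time, error) {
		if s == "" {
			return time.Time{}, nil
		}
		return time.Parse(layout, s)
	}
	o, err := parse(oldest)
	if err != nil {
		return err
	}
	n, err := parse(newest)
	if err != nil {
		return err
	}
	if !n.IsZero() && !o.IsZero() && o.After(n) {
		return errors.New("--oldest must specify a time before --newest")
	}
	return nil
}

func main() {
	fmt.Println(checkTimeRange("2026-01-01", "2020-01-01")) // ordering error
	fmt.Println(checkTimeRange("2020-01-01", "2026-01-01")) // <nil>
}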

View File

@@ -7,14 +7,16 @@ import (
"io"
"strconv"
"github.com/restic/restic/internal/data"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui/termstatus"
"github.com/restic/restic/internal/ui"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
func newForgetCommand() *cobra.Command {
func newForgetCommand(globalOptions *global.Options) *cobra.Command {
var opts ForgetOptions
var pruneOpts PruneOptions
@@ -49,9 +51,8 @@ Exit status is 12 if the password is incorrect.
GroupID: cmdGroupDefault,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
term, cancel := setupTermstatus()
defer cancel()
return runForget(cmd.Context(), opts, pruneOpts, globalOptions, term, args)
finalizeSnapshotFilter(&opts.SnapshotFilter)
return runForget(cmd.Context(), opts, pruneOpts, *globalOptions, globalOptions.Term, args)
},
}
@@ -104,21 +105,21 @@ type ForgetOptions struct {
Weekly ForgetPolicyCount
Monthly ForgetPolicyCount
Yearly ForgetPolicyCount
Within restic.Duration
WithinHourly restic.Duration
WithinDaily restic.Duration
WithinWeekly restic.Duration
WithinMonthly restic.Duration
WithinYearly restic.Duration
KeepTags restic.TagLists
Within data.Duration
WithinHourly data.Duration
WithinDaily data.Duration
WithinWeekly data.Duration
WithinMonthly data.Duration
WithinYearly data.Duration
KeepTags data.TagLists
UnsafeAllowRemoveAll bool
restic.SnapshotFilter
data.SnapshotFilter
Compact bool
// Grouping
GroupBy restic.SnapshotGroupByOptions
GroupBy data.SnapshotGroupByOptions
DryRun bool
Prune bool
}
@@ -149,7 +150,7 @@ func (opts *ForgetOptions) AddFlags(f *pflag.FlagSet) {
initMultiSnapshotFilter(f, &opts.SnapshotFilter, false)
f.BoolVarP(&opts.Compact, "compact", "c", false, "use compact output format")
opts.GroupBy = restic.SnapshotGroupByOptions{Host: true, Path: true}
opts.GroupBy = data.SnapshotGroupByOptions{Host: true, Path: true}
f.VarP(&opts.GroupBy, "group-by", "g", "`group` snapshots by host, paths and/or tags, separated by comma (disable grouping with '')")
f.BoolVarP(&opts.DryRun, "dry-run", "n", false, "do not delete anything, just print what would be done")
f.BoolVar(&opts.Prune, "prune", false, "automatically run the 'prune' command if snapshots have been removed")
@@ -163,7 +164,7 @@ func verifyForgetOptions(opts *ForgetOptions) error {
return errors.Fatal("negative values other than -1 are not allowed for --keep-*")
}
for _, d := range []restic.Duration{opts.Within, opts.WithinHourly, opts.WithinDaily,
for _, d := range []data.Duration{opts.Within, opts.WithinHourly, opts.WithinDaily,
opts.WithinMonthly, opts.WithinWeekly, opts.WithinYearly} {
if d.Hours < 0 || d.Days < 0 || d.Months < 0 || d.Years < 0 {
return errors.Fatal("durations containing negative values are not allowed for --keep-within*")
@@ -173,7 +174,7 @@ func verifyForgetOptions(opts *ForgetOptions) error {
return nil
}
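verifyForgetOptions rejects any --keep-within* duration with a negative component, as seen above. A standalone sketch of that check with an illustrative duration struct (not restic's data.Duration):

package main

import (
	"errors"
	"fmt"
)

// duration mirrors the year/month/day/hour fields checked above.
type duration struct {
	Years, Months, Days, Hours int
}

func rejectNegativeDurations(ds ...duration) error {
	for _, d := range ds {
		if d.Hours < 0 || d.Days < 0 || d.Months < 0 || d.Years < 0 {
			return errors.New("durations containing negative values are not allowed for --keep-within*")
		}
	}
	return nil
}

func main() {
	fmt.Println(rejectNegativeDurations(duration{Years: 1}, duration{Days: -3}))
}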
func runForget(ctx context.Context, opts ForgetOptions, pruneOptions PruneOptions, gopts GlobalOptions, term *termstatus.Terminal, args []string) error {
func runForget(ctx context.Context, opts ForgetOptions, pruneOptions PruneOptions, gopts global.Options, term ui.Terminal, args []string) error {
err := verifyForgetOptions(&opts)
if err != nil {
return err
@@ -188,22 +189,17 @@ func runForget(ctx context.Context, opts ForgetOptions, pruneOptions PruneOption
return errors.Fatal("--no-lock is only applicable in combination with --dry-run for forget command")
}
ctx, repo, unlock, err := openWithExclusiveLock(ctx, gopts, opts.DryRun && gopts.NoLock)
printer := ui.NewProgressPrinter(gopts.JSON, gopts.Verbosity, term)
ctx, repo, unlock, err := openWithExclusiveLock(ctx, gopts, opts.DryRun && gopts.NoLock, printer)
if err != nil {
return err
}
defer unlock()
verbosity := gopts.verbosity
if gopts.JSON {
verbosity = 0
}
printer := newTerminalProgressPrinter(verbosity, term)
var snapshots restic.Snapshots
var snapshots data.Snapshots
removeSnIDs := restic.NewIDSet()
for sn := range FindFilteredSnapshots(ctx, repo, repo, &opts.SnapshotFilter, args) {
for sn := range FindFilteredSnapshots(ctx, repo, repo, &opts.SnapshotFilter, args, printer) {
snapshots = append(snapshots, sn)
}
if ctx.Err() != nil {
@@ -218,12 +214,12 @@ func runForget(ctx context.Context, opts ForgetOptions, pruneOptions PruneOption
removeSnIDs.Insert(*sn.ID())
}
} else {
snapshotGroups, _, err := restic.GroupSnapshots(snapshots, opts.GroupBy)
snapshotGroups, _, err := data.GroupSnapshots(snapshots, opts.GroupBy)
if err != nil {
return err
}
policy := restic.ExpirePolicy{
policy := data.ExpirePolicy{
Last: int(opts.Last),
Hourly: int(opts.Hourly),
Daily: int(opts.Daily),
@@ -258,13 +254,13 @@ func runForget(ctx context.Context, opts ForgetOptions, pruneOptions PruneOption
}
if gopts.Verbose >= 1 && !gopts.JSON {
err = PrintSnapshotGroupHeader(globalOptions.stdout, k)
err = PrintSnapshotGroupHeader(gopts.Term.OutputWriter(), k)
if err != nil {
return err
}
}
var key restic.SnapshotGroupKey
var key data.SnapshotGroupKey
if json.Unmarshal([]byte(k), &key) != nil {
return err
}
@@ -274,21 +270,25 @@ func runForget(ctx context.Context, opts ForgetOptions, pruneOptions PruneOption
fg.Host = key.Hostname
fg.Paths = key.Paths
keep, remove, reasons := restic.ApplyPolicy(snapshotGroup, policy)
keep, remove, reasons := data.ApplyPolicy(snapshotGroup, policy)
if !policy.Empty() && len(keep) == 0 {
return fmt.Errorf("refusing to delete last snapshot of snapshot group \"%v\"", key.String())
}
if len(keep) != 0 && !gopts.Quiet && !gopts.JSON {
printer.P("keep %d snapshots:\n", len(keep))
PrintSnapshots(globalOptions.stdout, keep, reasons, opts.Compact)
if err := PrintSnapshots(gopts.Term.OutputWriter(), keep, reasons, opts.Compact); err != nil {
return err
}
printer.P("\n")
}
fg.Keep = asJSONSnapshots(keep)
if len(remove) != 0 && !gopts.Quiet && !gopts.JSON {
printer.P("remove %d snapshots:\n", len(remove))
PrintSnapshots(globalOptions.stdout, remove, nil, opts.Compact)
if err := PrintSnapshots(gopts.Term.OutputWriter(), remove, nil, opts.Compact); err != nil {
return err
}
printer.P("\n")
}
fg.Remove = asJSONSnapshots(remove)
@@ -331,7 +331,7 @@ func runForget(ctx context.Context, opts ForgetOptions, pruneOptions PruneOption
}
if gopts.JSON && len(jsonGroups) > 0 {
err = printJSONForget(globalOptions.stdout, jsonGroups)
err = printJSONForget(gopts.Term.OutputWriter(), jsonGroups)
if err != nil {
return err
}
@@ -348,7 +348,7 @@ func runForget(ctx context.Context, opts ForgetOptions, pruneOptions PruneOption
printer.P("%d snapshots have been removed, running prune\n", len(removeSnIDs))
}
pruneOptions.DryRun = opts.DryRun
return runPruneWithRepo(ctx, pruneOptions, gopts, repo, removeSnIDs, term)
return runPruneWithRepo(ctx, pruneOptions, repo, removeSnIDs, printer)
}
return nil
@@ -364,7 +364,7 @@ type ForgetGroup struct {
Reasons []KeepReason `json:"reasons"`
}
func asJSONSnapshots(list restic.Snapshots) []Snapshot {
func asJSONSnapshots(list data.Snapshots) []Snapshot {
var resultList []Snapshot
for _, sn := range list {
k := Snapshot{
@@ -383,7 +383,7 @@ type KeepReason struct {
Matches []string `json:"matches"`
}
func asJSONKeeps(list []restic.KeepReason) []KeepReason {
func asJSONKeeps(list []data.KeepReason) []KeepReason {
var resultList []KeepReason
for _, keep := range list {
k := KeepReason{

View File

@@ -6,22 +6,22 @@ import (
"strings"
"testing"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/data"
"github.com/restic/restic/internal/global"
rtest "github.com/restic/restic/internal/test"
"github.com/restic/restic/internal/ui/termstatus"
)
func testRunForgetMayFail(gopts GlobalOptions, opts ForgetOptions, args ...string) error {
func testRunForgetMayFail(t testing.TB, gopts global.Options, opts ForgetOptions, args ...string) error {
pruneOpts := PruneOptions{
MaxUnused: "5%",
}
return withTermStatus(gopts, func(ctx context.Context, term *termstatus.Terminal) error {
return runForget(context.TODO(), opts, pruneOpts, gopts, term, args)
return withTermStatus(t, gopts, func(ctx context.Context, gopts global.Options) error {
return runForget(context.TODO(), opts, pruneOpts, gopts, gopts.Term, args)
})
}
func testRunForget(t testing.TB, gopts GlobalOptions, opts ForgetOptions, args ...string) {
rtest.OK(t, testRunForgetMayFail(gopts, opts, args...))
func testRunForget(t testing.TB, gopts global.Options, opts ForgetOptions, args ...string) {
rtest.OK(t, testRunForgetMayFail(t, gopts, opts, args...))
}
func TestRunForgetSafetyNet(t *testing.T) {
@@ -38,27 +38,27 @@ func TestRunForgetSafetyNet(t *testing.T) {
testListSnapshots(t, env.gopts, 2)
// --keep-tags invalid
err := testRunForgetMayFail(env.gopts, ForgetOptions{
KeepTags: restic.TagLists{restic.TagList{"invalid"}},
GroupBy: restic.SnapshotGroupByOptions{Host: true, Path: true},
err := testRunForgetMayFail(t, env.gopts, ForgetOptions{
KeepTags: data.TagLists{data.TagList{"invalid"}},
GroupBy: data.SnapshotGroupByOptions{Host: true, Path: true},
})
rtest.Assert(t, strings.Contains(err.Error(), `refusing to delete last snapshot of snapshot group "host example, path`), "wrong error message got %v", err)
// disallow `forget --unsafe-allow-remove-all`
err = testRunForgetMayFail(env.gopts, ForgetOptions{
err = testRunForgetMayFail(t, env.gopts, ForgetOptions{
UnsafeAllowRemoveAll: true,
})
rtest.Assert(t, strings.Contains(err.Error(), `--unsafe-allow-remove-all is not allowed unless a snapshot filter option is specified`), "wrong error message got %v", err)
// disallow `forget` without options
err = testRunForgetMayFail(env.gopts, ForgetOptions{})
err = testRunForgetMayFail(t, env.gopts, ForgetOptions{})
rtest.Assert(t, strings.Contains(err.Error(), `no policy was specified, no snapshots will be removed`), "wrong error message got %v", err)
// `forget --host example --unsafe-allow-remove-all` should work
testRunForget(t, env.gopts, ForgetOptions{
UnsafeAllowRemoveAll: true,
GroupBy: restic.SnapshotGroupByOptions{Host: true, Path: true},
SnapshotFilter: restic.SnapshotFilter{
GroupBy: data.SnapshotGroupByOptions{Host: true, Path: true},
SnapshotFilter: data.SnapshotFilter{
Hosts: []string{opts.Host},
},
})

View File

@@ -3,7 +3,7 @@ package main
import (
"testing"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/data"
rtest "github.com/restic/restic/internal/test"
"github.com/spf13/pflag"
)
@@ -69,18 +69,18 @@ func TestForgetOptionValues(t *testing.T) {
{ForgetOptions{Weekly: -2}, negValErrorMsg},
{ForgetOptions{Monthly: -2}, negValErrorMsg},
{ForgetOptions{Yearly: -2}, negValErrorMsg},
{ForgetOptions{Within: restic.ParseDurationOrPanic("1y2m3d3h")}, ""},
{ForgetOptions{WithinHourly: restic.ParseDurationOrPanic("1y2m3d3h")}, ""},
{ForgetOptions{WithinDaily: restic.ParseDurationOrPanic("1y2m3d3h")}, ""},
{ForgetOptions{WithinWeekly: restic.ParseDurationOrPanic("1y2m3d3h")}, ""},
{ForgetOptions{WithinMonthly: restic.ParseDurationOrPanic("2y4m6d8h")}, ""},
{ForgetOptions{WithinYearly: restic.ParseDurationOrPanic("2y4m6d8h")}, ""},
{ForgetOptions{Within: restic.ParseDurationOrPanic("-1y2m3d3h")}, negDurationValErrorMsg},
{ForgetOptions{WithinHourly: restic.ParseDurationOrPanic("1y-2m3d3h")}, negDurationValErrorMsg},
{ForgetOptions{WithinDaily: restic.ParseDurationOrPanic("1y2m-3d3h")}, negDurationValErrorMsg},
{ForgetOptions{WithinWeekly: restic.ParseDurationOrPanic("1y2m3d-3h")}, negDurationValErrorMsg},
{ForgetOptions{WithinMonthly: restic.ParseDurationOrPanic("-2y4m6d8h")}, negDurationValErrorMsg},
{ForgetOptions{WithinYearly: restic.ParseDurationOrPanic("2y-4m6d8h")}, negDurationValErrorMsg},
{ForgetOptions{Within: data.ParseDurationOrPanic("1y2m3d3h")}, ""},
{ForgetOptions{WithinHourly: data.ParseDurationOrPanic("1y2m3d3h")}, ""},
{ForgetOptions{WithinDaily: data.ParseDurationOrPanic("1y2m3d3h")}, ""},
{ForgetOptions{WithinWeekly: data.ParseDurationOrPanic("1y2m3d3h")}, ""},
{ForgetOptions{WithinMonthly: data.ParseDurationOrPanic("2y4m6d8h")}, ""},
{ForgetOptions{WithinYearly: data.ParseDurationOrPanic("2y4m6d8h")}, ""},
{ForgetOptions{Within: data.ParseDurationOrPanic("-1y2m3d3h")}, negDurationValErrorMsg},
{ForgetOptions{WithinHourly: data.ParseDurationOrPanic("1y-2m3d3h")}, negDurationValErrorMsg},
{ForgetOptions{WithinDaily: data.ParseDurationOrPanic("1y2m-3d3h")}, negDurationValErrorMsg},
{ForgetOptions{WithinWeekly: data.ParseDurationOrPanic("1y2m3d-3h")}, negDurationValErrorMsg},
{ForgetOptions{WithinMonthly: data.ParseDurationOrPanic("-2y4m6d8h")}, negDurationValErrorMsg},
{ForgetOptions{WithinYearly: data.ParseDurationOrPanic("2y-4m6d8h")}, negDurationValErrorMsg},
}
for _, testCase := range testCases {
@@ -96,7 +96,38 @@ func TestForgetOptionValues(t *testing.T) {
func TestForgetHostnameDefaulting(t *testing.T) {
t.Setenv("RESTIC_HOST", "testhost")
opts := ForgetOptions{}
opts.AddFlags(pflag.NewFlagSet("test", pflag.ContinueOnError))
rtest.Equals(t, []string{"testhost"}, opts.Hosts)
tests := []struct {
name string
args []string
want []string
}{
{
name: "env default when flag not set",
args: nil,
want: []string{"testhost"},
},
{
name: "flag overrides env",
args: []string{"--host", "flaghost"},
want: []string{"flaghost"},
},
{
name: "empty flag clears env",
args: []string{"--host", ""},
want: nil,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
set := pflag.NewFlagSet(tt.name, pflag.ContinueOnError)
opts := ForgetOptions{}
opts.AddFlags(set)
err := set.Parse(tt.args)
rtest.Assert(t, err == nil, "expected no error for input")
finalizeSnapshotFilter(&opts.SnapshotFilter)
rtest.Equals(t, tt.want, opts.Hosts)
})
}
}

View File

@@ -6,12 +6,15 @@ import (
"time"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/ui/progress"
"github.com/spf13/cobra"
"github.com/spf13/cobra/doc"
"github.com/spf13/pflag"
)
func newGenerateCommand() *cobra.Command {
func newGenerateCommand(globalOptions *global.Options) *cobra.Command {
var opts generateOptions
cmd := &cobra.Command{
@@ -29,7 +32,7 @@ Exit status is 1 if there was any error.
`,
DisableAutoGenTag: true,
RunE: func(_ *cobra.Command, args []string) error {
return runGenerate(opts, args)
return runGenerate(opts, *globalOptions, args, globalOptions.Term)
},
}
opts.AddFlags(cmd.Flags())
@@ -52,7 +55,7 @@ func (opts *generateOptions) AddFlags(f *pflag.FlagSet) {
f.StringVar(&opts.PowerShellCompletionFile, "powershell-completion", "", "write powershell completion `file` (`-` for stdout)")
}
func writeManpages(root *cobra.Command, dir string) error {
func writeManpages(root *cobra.Command, dir string, printer progress.Printer) error {
// use a fixed date for the man pages so that generating them is deterministic
date, err := time.Parse("Jan 2006", "Jan 2017")
if err != nil {
@@ -66,14 +69,12 @@ func writeManpages(root *cobra.Command, dir string) error {
Date: &date,
}
Verbosef("writing man pages to directory %v\n", dir)
printer.P("writing man pages to directory %v", dir)
return doc.GenManTree(root, header, dir)
}
func writeCompletion(filename string, shell string, generate func(w io.Writer) error) (err error) {
if stdoutIsTerminal() {
Verbosef("writing %s completion file to %v\n", shell, filename)
}
func writeCompletion(filename string, shell string, generate func(w io.Writer) error, printer progress.Printer, gopts global.Options) (err error) {
printer.PT("writing %s completion file to %v", shell, filename)
var outWriter io.Writer
if filename != "-" {
var outFile *os.File
@@ -84,7 +85,7 @@ func writeCompletion(filename string, shell string, generate func(w io.Writer) e
defer func() { err = outFile.Close() }()
outWriter = outFile
} else {
outWriter = globalOptions.stdout
outWriter = gopts.Term.OutputWriter()
}
err = generate(outWriter)
@@ -110,15 +111,16 @@ func checkStdoutForSingleShell(opts generateOptions) error {
return nil
}
func runGenerate(opts generateOptions, args []string) error {
func runGenerate(opts generateOptions, gopts global.Options, args []string, term ui.Terminal) error {
if len(args) > 0 {
return errors.Fatal("the generate command expects no arguments, only options - please see `restic help generate` for usage and flags")
}
cmdRoot := newRootCommand()
printer := ui.NewProgressPrinter(gopts.JSON, gopts.Verbosity, term)
cmdRoot := newRootCommand(&global.Options{})
if opts.ManDir != "" {
err := writeManpages(cmdRoot, opts.ManDir)
err := writeManpages(cmdRoot, opts.ManDir, printer)
if err != nil {
return err
}
@@ -130,28 +132,28 @@ func runGenerate(opts generateOptions, args []string) error {
}
if opts.BashCompletionFile != "" {
err := writeCompletion(opts.BashCompletionFile, "bash", cmdRoot.GenBashCompletion)
err := writeCompletion(opts.BashCompletionFile, "bash", cmdRoot.GenBashCompletion, printer, gopts)
if err != nil {
return err
}
}
if opts.FishCompletionFile != "" {
err := writeCompletion(opts.FishCompletionFile, "fish", func(w io.Writer) error { return cmdRoot.GenFishCompletion(w, true) })
err := writeCompletion(opts.FishCompletionFile, "fish", func(w io.Writer) error { return cmdRoot.GenFishCompletion(w, true) }, printer, gopts)
if err != nil {
return err
}
}
if opts.ZSHCompletionFile != "" {
err := writeCompletion(opts.ZSHCompletionFile, "zsh", cmdRoot.GenZshCompletion)
err := writeCompletion(opts.ZSHCompletionFile, "zsh", cmdRoot.GenZshCompletion, printer, gopts)
if err != nil {
return err
}
}
if opts.PowerShellCompletionFile != "" {
err := writeCompletion(opts.PowerShellCompletionFile, "powershell", cmdRoot.GenPowerShellCompletion)
err := writeCompletion(opts.PowerShellCompletionFile, "powershell", cmdRoot.GenPowerShellCompletion, printer, gopts)
if err != nil {
return err
}
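writeCompletion now takes the generator as a closure together with a printer and the options, and writes either to a file or to the terminal's output writer. A standalone sketch of the file-or-stdout selection (helper names are illustrative, not the real functions):

package main

import (
	"fmt"
	"io"
	"os"
	"strings"
)

// writeToFileOrStdout mirrors the shape of writeCompletion above: the caller
// passes a generator closure and a filename, with "-" meaning stdout.
func writeToFileOrStdout(filename string, generate func(w io.Writer) error) (err error) {
	var out io.Writer = os.Stdout
	if filename != "-" {
		f, cerr := os.Create(filename)
		if cerr != nil {
			return cerr
		}
		defer func() {
			if cerr := f.Close(); err == nil {
				err = cerr
			}
		}()
		out = f
	}
	return generate(out)
}

func main() {
	err := writeToFileOrStdout("-", func(w io.Writer) error {
		_, werr := io.Copy(w, strings.NewReader("# bash completion for restic\n"))
		return werr
	})
	fmt.Println("err:", err)
}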

View File

@@ -1,13 +1,21 @@
package main
import (
"bytes"
"context"
"strings"
"testing"
"github.com/restic/restic/internal/global"
rtest "github.com/restic/restic/internal/test"
)
func testRunGenerate(t testing.TB, gopts global.Options, opts generateOptions) ([]byte, error) {
buf, err := withCaptureStdout(t, gopts, func(ctx context.Context, gopts global.Options) error {
return runGenerate(opts, gopts, []string{}, gopts.Term)
})
return buf.Bytes(), err
}
func TestGenerateStdout(t *testing.T) {
testCases := []struct {
name string
@@ -21,20 +29,14 @@ func TestGenerateStdout(t *testing.T) {
for _, tc := range testCases {
t.Run(tc.name, func(t *testing.T) {
buf := bytes.NewBuffer(nil)
globalOptions.stdout = buf
err := runGenerate(tc.opts, []string{})
output, err := testRunGenerate(t, global.Options{}, tc.opts)
rtest.OK(t, err)
completionString := buf.String()
rtest.Assert(t, strings.Contains(completionString, "# "+tc.name+" completion for restic"), "has no expected completion header")
rtest.Assert(t, strings.Contains(string(output), "# "+tc.name+" completion for restic"), "has no expected completion header")
})
}
t.Run("Generate shell completions to stdout for two shells", func(t *testing.T) {
buf := bytes.NewBuffer(nil)
globalOptions.stdout = buf
opts := generateOptions{BashCompletionFile: "-", FishCompletionFile: "-"}
err := runGenerate(opts, []string{})
_, err := testRunGenerate(t, global.Options{}, generateOptions{BashCompletionFile: "-", FishCompletionFile: "-"})
rtest.Assert(t, err != nil, "generate shell completions to stdout for two shells fails")
})
}

View File

@@ -8,14 +8,16 @@ import (
"github.com/restic/chunker"
"github.com/restic/restic/internal/backend/location"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/ui/progress"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
func newInitCommand() *cobra.Command {
func newInitCommand(globalOptions *global.Options) *cobra.Command {
var opts InitOptions
cmd := &cobra.Command{
@@ -33,7 +35,7 @@ Exit status is 1 if there was any error.
GroupID: cmdGroupDefault,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runInit(cmd.Context(), opts, globalOptions, args)
return runInit(cmd.Context(), opts, *globalOptions, args, globalOptions.Term)
},
}
opts.AddFlags(cmd.Flags())
@@ -42,105 +44,78 @@ Exit status is 1 if there was any error.
// InitOptions bundles all options for the init command.
type InitOptions struct {
secondaryRepoOptions
global.SecondaryRepoOptions
CopyChunkerParameters bool
RepositoryVersion string
}
func (opts *InitOptions) AddFlags(f *pflag.FlagSet) {
opts.secondaryRepoOptions.AddFlags(f, "secondary", "to copy chunker parameters from")
opts.SecondaryRepoOptions.AddFlags(f, "secondary", "to copy chunker parameters from")
f.BoolVar(&opts.CopyChunkerParameters, "copy-chunker-params", false, "copy chunker parameters from the secondary repository (useful with the copy command)")
f.StringVar(&opts.RepositoryVersion, "repository-version", "stable", "repository format version to use, allowed values are a format version, 'latest' and 'stable'")
}
func runInit(ctx context.Context, opts InitOptions, gopts GlobalOptions, args []string) error {
func runInit(ctx context.Context, opts InitOptions, gopts global.Options, args []string, term ui.Terminal) error {
if len(args) > 0 {
return errors.Fatal("the init command expects no arguments, only options - please see `restic help init` for usage and flags")
}
printer := ui.NewProgressPrinter(gopts.JSON, gopts.Verbosity, term)
var version uint
if opts.RepositoryVersion == "latest" || opts.RepositoryVersion == "" {
switch opts.RepositoryVersion {
case "latest", "":
version = restic.MaxRepoVersion
} else if opts.RepositoryVersion == "stable" {
case "stable":
version = restic.StableRepoVersion
} else {
default:
v, err := strconv.ParseUint(opts.RepositoryVersion, 10, 32)
if err != nil {
return errors.Fatal("invalid repository version")
}
version = uint(v)
}
if version < restic.MinRepoVersion || version > restic.MaxRepoVersion {
return errors.Fatalf("only repository versions between %v and %v are allowed", restic.MinRepoVersion, restic.MaxRepoVersion)
}
chunkerPolynomial, err := maybeReadChunkerPolynomial(ctx, opts, gopts)
chunkerPolynomial, err := maybeReadChunkerPolynomial(ctx, opts, gopts, printer)
if err != nil {
return err
}
gopts.Repo, err = ReadRepo(gopts)
s, err := global.CreateRepository(ctx, gopts, version, chunkerPolynomial, printer)
if err != nil {
return err
}
gopts.password, err = ReadPasswordTwice(ctx, gopts,
"enter password for new repository: ",
"enter password again: ")
if err != nil {
return err
}
be, err := create(ctx, gopts.Repo, gopts, gopts.extended)
if err != nil {
return errors.Fatalf("create repository at %s failed: %v\n", location.StripPassword(gopts.backends, gopts.Repo), err)
}
s, err := repository.New(be, repository.Options{
Compression: gopts.Compression,
PackSize: gopts.PackSize * 1024 * 1024,
})
if err != nil {
return errors.Fatal(err.Error())
}
err = s.Init(ctx, version, gopts.password, chunkerPolynomial)
if err != nil {
return errors.Fatalf("create key in repository at %s failed: %v\n", location.StripPassword(gopts.backends, gopts.Repo), err)
return errors.Fatalf("%s", err)
}
if !gopts.JSON {
Verbosef("created restic repository %v at %s", s.Config().ID[:10], location.StripPassword(gopts.backends, gopts.Repo))
printer.P("created restic repository %v at %s", s.Config().ID[:10], location.StripPassword(gopts.Backends, gopts.Repo))
if opts.CopyChunkerParameters && chunkerPolynomial != nil {
Verbosef(" with chunker parameters copied from secondary repository\n")
} else {
Verbosef("\n")
printer.P(" with chunker parameters copied from secondary repository")
}
Verbosef("\n")
Verbosef("Please note that knowledge of your password is required to access\n")
Verbosef("the repository. Losing your password means that your data is\n")
Verbosef("irrecoverably lost.\n")
printer.P("")
printer.P("Please note that knowledge of your password is required to access")
printer.P("the repository. Losing your password means that your data is")
printer.P("irrecoverably lost.")
} else {
status := initSuccess{
MessageType: "initialized",
ID: s.Config().ID,
Repository: location.StripPassword(gopts.backends, gopts.Repo),
Repository: location.StripPassword(gopts.Backends, gopts.Repo),
}
return json.NewEncoder(globalOptions.stdout).Encode(status)
return json.NewEncoder(gopts.Term.OutputWriter()).Encode(status)
}
return nil
}
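runInit resolves the --repository-version string with the switch above: "latest" and the empty string map to the maximum supported version, "stable" to the stable default, and anything else must parse as a number inside the supported range. A standalone sketch of that resolution (the bounds below are placeholders, not restic's actual constants):

package main

import (
	"errors"
	"fmt"
	"strconv"
)

const (
	minRepoVersion    = 1
	maxRepoVersion    = 2
	stableRepoVersion = 2
)

// resolveRepoVersion turns the user-supplied version string into a concrete
// version number and validates it against the supported range.
func resolveRepoVersion(s string) (uint, error) {
	var version uint
	switch s {
	case "latest", "":
		version = maxRepoVersion
	case "stable":
		version = stableRepoVersion
	default:
		v, err := strconv.ParseUint(s, 10, 32)
		if err != nil {
			return 0, errors.New("invalid repository version")
		}
		version = uint(v)
	}
	if version < minRepoVersion || version > maxRepoVersion {
		return 0, fmt.Errorf("only repository versions between %v and %v are allowed", minRepoVersion, maxRepoVersion)
	}
	return version, nil
}

func main() {
	fmt.Println(resolveRepoVersion("stable"))
	fmt.Println(resolveRepoVersion("99"))
}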
func maybeReadChunkerPolynomial(ctx context.Context, opts InitOptions, gopts GlobalOptions) (*chunker.Pol, error) {
func maybeReadChunkerPolynomial(ctx context.Context, opts InitOptions, gopts global.Options, printer progress.Printer) (*chunker.Pol, error) {
if opts.CopyChunkerParameters {
otherGopts, _, err := fillSecondaryGlobalOpts(ctx, opts.secondaryRepoOptions, gopts, "secondary")
otherGopts, _, err := opts.SecondaryRepoOptions.FillGlobalOpts(ctx, gopts, "secondary")
if err != nil {
return nil, err
}
otherRepo, err := OpenRepository(ctx, otherGopts)
otherRepo, err := global.OpenRepository(ctx, otherGopts, printer)
if err != nil {
return nil, err
}

View File

@@ -6,22 +6,27 @@ import (
"path/filepath"
"testing"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
rtest "github.com/restic/restic/internal/test"
"github.com/restic/restic/internal/ui/progress"
)
func testRunInit(t testing.TB, opts GlobalOptions) {
func testRunInit(t testing.TB, gopts global.Options) {
repository.TestUseLowSecurityKDFParameters(t)
restic.TestDisableCheckPolynomial(t)
restic.TestSetLockTimeout(t, 0)
rtest.OK(t, runInit(context.TODO(), InitOptions{}, opts, nil))
t.Logf("repository initialized at %v", opts.Repo)
err := withTermStatus(t, gopts, func(ctx context.Context, gopts global.Options) error {
return runInit(ctx, InitOptions{}, gopts, nil, gopts.Term)
})
rtest.OK(t, err)
t.Logf("repository initialized at %v", gopts.Repo)
// create temporary junk files to verify that restic does not trip over them
for _, path := range []string{"index", "snapshots", "keys", "locks", filepath.Join("data", "00")} {
rtest.OK(t, os.WriteFile(filepath.Join(opts.Repo, path, "tmp12345"), []byte("junk file"), 0o600))
rtest.OK(t, os.WriteFile(filepath.Join(gopts.Repo, path, "tmp12345"), []byte("junk file"), 0o600))
}
}
@@ -34,20 +39,34 @@ func TestInitCopyChunkerParams(t *testing.T) {
testRunInit(t, env2.gopts)
initOpts := InitOptions{
secondaryRepoOptions: secondaryRepoOptions{
SecondaryRepoOptions: global.SecondaryRepoOptions{
Repo: env2.gopts.Repo,
password: env2.gopts.password,
Password: env2.gopts.Password,
},
}
rtest.Assert(t, runInit(context.TODO(), initOpts, env.gopts, nil) != nil, "expected invalid init options to fail")
err := withTermStatus(t, env.gopts, func(ctx context.Context, gopts global.Options) error {
return runInit(ctx, initOpts, gopts, nil, gopts.Term)
})
rtest.Assert(t, err != nil, "expected invalid init options to fail")
initOpts.CopyChunkerParameters = true
rtest.OK(t, runInit(context.TODO(), initOpts, env.gopts, nil))
repo, err := OpenRepository(context.TODO(), env.gopts)
err = withTermStatus(t, env.gopts, func(ctx context.Context, gopts global.Options) error {
return runInit(ctx, initOpts, gopts, nil, gopts.Term)
})
rtest.OK(t, err)
otherRepo, err := OpenRepository(context.TODO(), env2.gopts)
var repo *repository.Repository
err = withTermStatus(t, env.gopts, func(ctx context.Context, gopts global.Options) error {
repo, err = global.OpenRepository(ctx, gopts, &progress.NoopPrinter{})
return err
})
rtest.OK(t, err)
var otherRepo *repository.Repository
err = withTermStatus(t, env2.gopts, func(ctx context.Context, gopts global.Options) error {
otherRepo, err = global.OpenRepository(ctx, gopts, &progress.NoopPrinter{})
return err
})
rtest.OK(t, err)
rtest.Assert(t, repo.Config().ChunkerPolynomial == otherRepo.Config().ChunkerPolynomial,

View File

@@ -1,10 +1,11 @@
package main
import (
"github.com/restic/restic/internal/global"
"github.com/spf13/cobra"
)
func newKeyCommand() *cobra.Command {
func newKeyCommand(globalOptions *global.Options) *cobra.Command {
cmd := &cobra.Command{
Use: "key",
Short: "Manage keys (passwords)",
@@ -17,10 +18,10 @@ per repository.
}
cmd.AddCommand(
newKeyAddCommand(),
newKeyListCommand(),
newKeyPasswdCommand(),
newKeyRemoveCommand(),
newKeyAddCommand(globalOptions),
newKeyListCommand(globalOptions),
newKeyPasswdCommand(globalOptions),
newKeyRemoveCommand(globalOptions),
)
return cmd
}

View File

@@ -5,12 +5,15 @@ import (
"fmt"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/ui/progress"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
func newKeyAddCommand() *cobra.Command {
func newKeyAddCommand(globalOptions *global.Options) *cobra.Command {
var opts KeyAddOptions
cmd := &cobra.Command{
@@ -30,7 +33,7 @@ Exit status is 12 if the password is incorrect.
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runKeyAdd(cmd.Context(), globalOptions, opts, args)
return runKeyAdd(cmd.Context(), *globalOptions, opts, args, globalOptions.Term)
},
}
@@ -52,21 +55,22 @@ func (opts *KeyAddOptions) Add(flags *pflag.FlagSet) {
flags.StringVarP(&opts.Hostname, "host", "", "", "the hostname for new key")
}
func runKeyAdd(ctx context.Context, gopts GlobalOptions, opts KeyAddOptions, args []string) error {
func runKeyAdd(ctx context.Context, gopts global.Options, opts KeyAddOptions, args []string, term ui.Terminal) error {
if len(args) > 0 {
return fmt.Errorf("the key add command expects no arguments, only options - please see `restic help key add` for usage and flags")
}
ctx, repo, unlock, err := openWithAppendLock(ctx, gopts, false)
printer := ui.NewProgressPrinter(false, gopts.Verbosity, term)
ctx, repo, unlock, err := openWithAppendLock(ctx, gopts, false, printer)
if err != nil {
return err
}
defer unlock()
return addKey(ctx, repo, gopts, opts)
return addKey(ctx, repo, gopts, opts, printer)
}
func addKey(ctx context.Context, repo *repository.Repository, gopts GlobalOptions, opts KeyAddOptions) error {
func addKey(ctx context.Context, repo *repository.Repository, gopts global.Options, opts KeyAddOptions, printer progress.Printer) error {
pw, err := getNewPassword(ctx, gopts, opts.NewPasswordFile, opts.InsecureNoPassword)
if err != nil {
return err
@@ -74,7 +78,7 @@ func addKey(ctx context.Context, repo *repository.Repository, gopts GlobalOption
id, err := repository.AddKey(ctx, repo, pw, opts.Username, opts.Hostname, repo.Key())
if err != nil {
return errors.Fatalf("creating new key failed: %v\n", err)
return errors.Fatalf("creating new key failed: %v", err)
}
err = switchToNewKeyAndRemoveIfBroken(ctx, repo, id, pw)
@@ -82,7 +86,7 @@ func addKey(ctx context.Context, repo *repository.Repository, gopts GlobalOption
return err
}
Verbosef("saved new key with ID %s\n", id.ID())
printer.P("saved new key with ID %s", id.ID())
return nil
}
@@ -90,7 +94,7 @@ func addKey(ctx context.Context, repo *repository.Repository, gopts GlobalOption
// testKeyNewPassword is used to set a new password during integration testing.
var testKeyNewPassword string
func getNewPassword(ctx context.Context, gopts GlobalOptions, newPasswordFile string, insecureNoPassword bool) (string, error) {
func getNewPassword(ctx context.Context, gopts global.Options, newPasswordFile string, insecureNoPassword bool) (string, error) {
if testKeyNewPassword != "" {
return testKeyNewPassword, nil
}
@@ -103,7 +107,7 @@ func getNewPassword(ctx context.Context, gopts GlobalOptions, newPasswordFile st
}
if newPasswordFile != "" {
password, err := loadPasswordFromFile(newPasswordFile)
password, err := global.LoadPasswordFromFile(newPasswordFile)
if err != nil {
return "", err
}
@@ -116,11 +120,11 @@ func getNewPassword(ctx context.Context, gopts GlobalOptions, newPasswordFile st
// Since we already have an open repository, temporarily remove the password
// to prompt the user for the password.
newopts := gopts
newopts.password = ""
newopts.Password = ""
// empty passwords are already handled above
newopts.InsecureNoPassword = false
return ReadPasswordTwice(ctx, newopts,
return global.ReadPasswordTwice(ctx, newopts,
"enter new password: ",
"enter password again: ")
}

View File

@@ -10,13 +10,15 @@ import (
"testing"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/repository"
rtest "github.com/restic/restic/internal/test"
"github.com/restic/restic/internal/ui/progress"
)
func testRunKeyListOtherIDs(t testing.TB, gopts GlobalOptions) []string {
buf, err := withCaptureStdout(func() error {
return runKeyList(context.TODO(), gopts, []string{})
func testRunKeyListOtherIDs(t testing.TB, gopts global.Options) []string {
buf, err := withCaptureStdout(t, gopts, func(ctx context.Context, gopts global.Options) error {
return runKeyList(ctx, gopts, []string{}, gopts.Term)
})
rtest.OK(t, err)
@@ -33,49 +35,64 @@ func testRunKeyListOtherIDs(t testing.TB, gopts GlobalOptions) []string {
return IDs
}
func testRunKeyAddNewKey(t testing.TB, newPassword string, gopts GlobalOptions) {
func testRunKeyAddNewKey(t testing.TB, newPassword string, gopts global.Options) {
testKeyNewPassword = newPassword
defer func() {
testKeyNewPassword = ""
}()
rtest.OK(t, runKeyAdd(context.TODO(), gopts, KeyAddOptions{}, []string{}))
err := withTermStatus(t, gopts, func(ctx context.Context, gopts global.Options) error {
return runKeyAdd(ctx, gopts, KeyAddOptions{}, []string{}, gopts.Term)
})
rtest.OK(t, err)
}
func testRunKeyAddNewKeyUserHost(t testing.TB, gopts GlobalOptions) {
func testRunKeyAddNewKeyUserHost(t testing.TB, gopts global.Options) {
testKeyNewPassword = "john's geheimnis"
defer func() {
testKeyNewPassword = ""
}()
t.Log("adding key for john@example.com")
rtest.OK(t, runKeyAdd(context.TODO(), gopts, KeyAddOptions{
Username: "john",
Hostname: "example.com",
}, []string{}))
repo, err := OpenRepository(context.TODO(), gopts)
rtest.OK(t, err)
key, err := repository.SearchKey(context.TODO(), repo, testKeyNewPassword, 2, "")
err := withTermStatus(t, gopts, func(ctx context.Context, gopts global.Options) error {
return runKeyAdd(ctx, gopts, KeyAddOptions{
Username: "john",
Hostname: "example.com",
}, []string{}, gopts.Term)
})
rtest.OK(t, err)
rtest.Equals(t, "john", key.Username)
rtest.Equals(t, "example.com", key.Hostname)
_ = withTermStatus(t, gopts, func(ctx context.Context, gopts global.Options) error {
repo, err := global.OpenRepository(ctx, gopts, &progress.NoopPrinter{})
rtest.OK(t, err)
key, err := repository.SearchKey(ctx, repo, testKeyNewPassword, 2, "")
rtest.OK(t, err)
rtest.Equals(t, "john", key.Username)
rtest.Equals(t, "example.com", key.Hostname)
return nil
})
}
func testRunKeyPasswd(t testing.TB, newPassword string, gopts GlobalOptions) {
func testRunKeyPasswd(t testing.TB, newPassword string, gopts global.Options) {
testKeyNewPassword = newPassword
defer func() {
testKeyNewPassword = ""
}()
rtest.OK(t, runKeyPasswd(context.TODO(), gopts, KeyPasswdOptions{}, []string{}))
err := withTermStatus(t, gopts, func(ctx context.Context, gopts global.Options) error {
return runKeyPasswd(ctx, gopts, KeyPasswdOptions{}, []string{}, gopts.Term)
})
rtest.OK(t, err)
}
func testRunKeyRemove(t testing.TB, gopts GlobalOptions, IDs []string) {
func testRunKeyRemove(t testing.TB, gopts global.Options, IDs []string) {
t.Logf("remove %d keys: %q\n", len(IDs), IDs)
for _, id := range IDs {
rtest.OK(t, runKeyRemove(context.TODO(), gopts, []string{id}))
err := withTermStatus(t, gopts, func(ctx context.Context, gopts global.Options) error {
return runKeyRemove(ctx, gopts, []string{id}, gopts.Term)
})
rtest.OK(t, err)
}
}
@@ -87,25 +104,28 @@ func TestKeyAddRemove(t *testing.T) {
env, cleanup := withTestEnvironment(t)
// must list keys more than once
env.gopts.backendTestHook = nil
env.gopts.BackendTestHook = nil
defer cleanup()
testRunInit(t, env.gopts)
testRunKeyPasswd(t, "geheim2", env.gopts)
env.gopts.password = "geheim2"
t.Logf("changed password to %q", env.gopts.password)
env.gopts.Password = "geheim2"
t.Logf("changed password to %q", env.gopts.Password)
for _, newPassword := range passwordList {
testRunKeyAddNewKey(t, newPassword, env.gopts)
t.Logf("added new password %q", newPassword)
env.gopts.password = newPassword
env.gopts.Password = newPassword
testRunKeyRemove(t, env.gopts, testRunKeyListOtherIDs(t, env.gopts))
}
env.gopts.password = passwordList[len(passwordList)-1]
t.Logf("testing access with last password %q\n", env.gopts.password)
rtest.OK(t, runKeyList(context.TODO(), env.gopts, []string{}))
env.gopts.Password = passwordList[len(passwordList)-1]
t.Logf("testing access with last password %q\n", env.gopts.Password)
err := withTermStatus(t, env.gopts, func(ctx context.Context, gopts global.Options) error {
return runKeyList(ctx, gopts, []string{}, gopts.Term)
})
rtest.OK(t, err)
testRunCheck(t, env.gopts)
testRunKeyAddNewKeyUserHost(t, env.gopts)
@@ -116,33 +136,40 @@ func TestKeyAddInvalid(t *testing.T) {
defer cleanup()
testRunInit(t, env.gopts)
err := runKeyAdd(context.TODO(), env.gopts, KeyAddOptions{
NewPasswordFile: "some-file",
InsecureNoPassword: true,
}, []string{})
err := withTermStatus(t, env.gopts, func(ctx context.Context, gopts global.Options) error {
return runKeyAdd(ctx, gopts, KeyAddOptions{
NewPasswordFile: "some-file",
InsecureNoPassword: true,
}, []string{}, gopts.Term)
})
rtest.Assert(t, strings.Contains(err.Error(), "only either"), "unexpected error message, got %q", err)
pwfile := filepath.Join(t.TempDir(), "pwfile")
rtest.OK(t, os.WriteFile(pwfile, []byte{}, 0o666))
err = runKeyAdd(context.TODO(), env.gopts, KeyAddOptions{
NewPasswordFile: pwfile,
}, []string{})
err = withTermStatus(t, env.gopts, func(ctx context.Context, gopts global.Options) error {
return runKeyAdd(ctx, gopts, KeyAddOptions{
NewPasswordFile: pwfile,
}, []string{}, gopts.Term)
})
rtest.Assert(t, strings.Contains(err.Error(), "an empty password is not allowed by default"), "unexpected error message, got %q", err)
}
func TestKeyAddEmpty(t *testing.T) {
env, cleanup := withTestEnvironment(t)
// must list keys more than once
env.gopts.backendTestHook = nil
env.gopts.BackendTestHook = nil
defer cleanup()
testRunInit(t, env.gopts)
rtest.OK(t, runKeyAdd(context.TODO(), env.gopts, KeyAddOptions{
InsecureNoPassword: true,
}, []string{}))
err := withTermStatus(t, env.gopts, func(ctx context.Context, gopts global.Options) error {
return runKeyAdd(ctx, gopts, KeyAddOptions{
InsecureNoPassword: true,
}, []string{}, gopts.Term)
})
rtest.OK(t, err)
env.gopts.password = ""
env.gopts.Password = ""
env.gopts.InsecureNoPassword = true
testRunCheck(t, env.gopts)
@@ -161,7 +188,7 @@ func TestKeyProblems(t *testing.T) {
defer cleanup()
testRunInit(t, env.gopts)
env.gopts.backendTestHook = func(r backend.Backend) (backend.Backend, error) {
env.gopts.BackendTestHook = func(r backend.Backend) (backend.Backend, error) {
return &emptySaveBackend{r}, nil
}
@@ -170,16 +197,23 @@ func TestKeyProblems(t *testing.T) {
testKeyNewPassword = ""
}()
err := runKeyPasswd(context.TODO(), env.gopts, KeyPasswdOptions{}, []string{})
err := withTermStatus(t, env.gopts, func(ctx context.Context, gopts global.Options) error {
return runKeyPasswd(ctx, gopts, KeyPasswdOptions{}, []string{}, gopts.Term)
})
t.Log(err)
rtest.Assert(t, err != nil, "expected passwd change to fail")
err = runKeyAdd(context.TODO(), env.gopts, KeyAddOptions{}, []string{})
err = withTermStatus(t, env.gopts, func(ctx context.Context, gopts global.Options) error {
return runKeyAdd(ctx, gopts, KeyAddOptions{}, []string{}, gopts.Term)
})
t.Log(err)
rtest.Assert(t, err != nil, "expected key adding to fail")
t.Logf("testing access with initial password %q\n", env.gopts.password)
rtest.OK(t, runKeyList(context.TODO(), env.gopts, []string{}))
t.Logf("testing access with initial password %q\n", env.gopts.Password)
err = withTermStatus(t, env.gopts, func(ctx context.Context, gopts global.Options) error {
return runKeyList(ctx, gopts, []string{}, gopts.Term)
})
rtest.OK(t, err)
testRunCheck(t, env.gopts)
}
@@ -188,27 +222,37 @@ func TestKeyCommandInvalidArguments(t *testing.T) {
defer cleanup()
testRunInit(t, env.gopts)
env.gopts.backendTestHook = func(r backend.Backend) (backend.Backend, error) {
env.gopts.BackendTestHook = func(r backend.Backend) (backend.Backend, error) {
return &emptySaveBackend{r}, nil
}
err := runKeyAdd(context.TODO(), env.gopts, KeyAddOptions{}, []string{"johndoe"})
err := withTermStatus(t, env.gopts, func(ctx context.Context, gopts global.Options) error {
return runKeyAdd(ctx, gopts, KeyAddOptions{}, []string{"johndoe"}, gopts.Term)
})
t.Log(err)
rtest.Assert(t, err != nil && strings.Contains(err.Error(), "no arguments"), "unexpected error for key add: %v", err)
err = runKeyPasswd(context.TODO(), env.gopts, KeyPasswdOptions{}, []string{"johndoe"})
err = withTermStatus(t, env.gopts, func(ctx context.Context, gopts global.Options) error {
return runKeyPasswd(ctx, gopts, KeyPasswdOptions{}, []string{"johndoe"}, gopts.Term)
})
t.Log(err)
rtest.Assert(t, err != nil && strings.Contains(err.Error(), "no arguments"), "unexpected error for key passwd: %v", err)
err = runKeyList(context.TODO(), env.gopts, []string{"johndoe"})
err = withTermStatus(t, env.gopts, func(ctx context.Context, gopts global.Options) error {
return runKeyList(ctx, gopts, []string{"johndoe"}, gopts.Term)
})
t.Log(err)
rtest.Assert(t, err != nil && strings.Contains(err.Error(), "no arguments"), "unexpected error for key list: %v", err)
err = runKeyRemove(context.TODO(), env.gopts, []string{})
err = withTermStatus(t, env.gopts, func(ctx context.Context, gopts global.Options) error {
return runKeyRemove(ctx, gopts, []string{}, gopts.Term)
})
t.Log(err)
rtest.Assert(t, err != nil && strings.Contains(err.Error(), "one argument"), "unexpected error for key remove: %v", err)
err = runKeyRemove(context.TODO(), env.gopts, []string{"john", "doe"})
err = withTermStatus(t, env.gopts, func(ctx context.Context, gopts global.Options) error {
return runKeyRemove(ctx, gopts, []string{"john", "doe"}, gopts.Term)
})
t.Log(err)
rtest.Assert(t, err != nil && strings.Contains(err.Error(), "one argument"), "unexpected error for key remove: %v", err)
}
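
For context, a minimal usage sketch of the test pattern these hunks switch to: commands are now exercised through a withTermStatus helper whose closure receives a context and a copy of global.Options with Term wired to a test terminal. The helper's implementation is not part of this excerpt, and the test name below is hypothetical; the calls simply mirror the hunks above.

func TestKeyListSmoke(t *testing.T) {
	env, cleanup := withTestEnvironment(t)
	defer cleanup()

	testRunInit(t, env.gopts)

	// Run the command under the test terminal; gopts.Term inside the closure
	// refers to that terminal, matching the calls in the hunks above.
	err := withTermStatus(t, env.gopts, func(ctx context.Context, gopts global.Options) error {
		return runKeyList(ctx, gopts, []string{}, gopts.Term)
	})
	rtest.OK(t, err)
}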

View File

@@ -6,13 +6,16 @@ import (
"fmt"
"sync"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/ui/progress"
"github.com/restic/restic/internal/ui/table"
"github.com/spf13/cobra"
)
func newKeyListCommand() *cobra.Command {
func newKeyListCommand(globalOptions *global.Options) *cobra.Command {
cmd := &cobra.Command{
Use: "list",
Short: "List keys (passwords)",
@@ -32,27 +35,28 @@ Exit status is 12 if the password is incorrect.
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runKeyList(cmd.Context(), globalOptions, args)
return runKeyList(cmd.Context(), *globalOptions, args, globalOptions.Term)
},
}
return cmd
}
func runKeyList(ctx context.Context, gopts GlobalOptions, args []string) error {
func runKeyList(ctx context.Context, gopts global.Options, args []string, term ui.Terminal) error {
if len(args) > 0 {
return fmt.Errorf("the key list command expects no arguments, only options - please see `restic help key list` for usage and flags")
}
ctx, repo, unlock, err := openWithReadLock(ctx, gopts, gopts.NoLock)
printer := ui.NewProgressPrinter(gopts.JSON, gopts.Verbosity, term)
ctx, repo, unlock, err := openWithReadLock(ctx, gopts, gopts.NoLock, printer)
if err != nil {
return err
}
defer unlock()
return listKeys(ctx, repo, gopts)
return listKeys(ctx, repo, gopts, printer)
}
func listKeys(ctx context.Context, s *repository.Repository, gopts GlobalOptions) error {
func listKeys(ctx context.Context, s *repository.Repository, gopts global.Options, printer progress.Printer) error {
type keyInfo struct {
Current bool `json:"current"`
ID string `json:"id"`
@@ -68,7 +72,7 @@ func listKeys(ctx context.Context, s *repository.Repository, gopts GlobalOptions
err := restic.ParallelList(ctx, s, restic.KeyFile, s.Connections(), func(ctx context.Context, id restic.ID, _ int64) error {
k, err := repository.LoadKey(ctx, s, id)
if err != nil {
Warnf("LoadKey() failed: %v\n", err)
printer.E("LoadKey() failed: %v", err)
return nil
}
@@ -78,7 +82,7 @@ func listKeys(ctx context.Context, s *repository.Repository, gopts GlobalOptions
ShortID: id.Str(),
UserName: k.Username,
HostName: k.Hostname,
Created: k.Created.Local().Format(TimeFormat),
Created: k.Created.Local().Format(global.TimeFormat),
}
m.Lock()
@@ -92,7 +96,7 @@ func listKeys(ctx context.Context, s *repository.Repository, gopts GlobalOptions
}
if gopts.JSON {
return json.NewEncoder(globalOptions.stdout).Encode(keys)
return json.NewEncoder(gopts.Term.OutputWriter()).Encode(keys)
}
tab := table.New()
@@ -105,5 +109,5 @@ func listKeys(ctx context.Context, s *repository.Repository, gopts GlobalOptions
tab.AddRow(key)
}
return tab.Write(globalOptions.stdout)
return tab.Write(gopts.Term.OutputWriter())
}

View File

@@ -5,12 +5,15 @@ import (
"fmt"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/ui/progress"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
func newKeyPasswdCommand() *cobra.Command {
func newKeyPasswdCommand(globalOptions *global.Options) *cobra.Command {
var opts KeyPasswdOptions
cmd := &cobra.Command{
@@ -31,7 +34,7 @@ Exit status is 12 if the password is incorrect.
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runKeyPasswd(cmd.Context(), globalOptions, opts, args)
return runKeyPasswd(cmd.Context(), *globalOptions, opts, args, globalOptions.Term)
},
}
@@ -47,21 +50,22 @@ func (opts *KeyPasswdOptions) AddFlags(flags *pflag.FlagSet) {
opts.KeyAddOptions.Add(flags)
}
func runKeyPasswd(ctx context.Context, gopts GlobalOptions, opts KeyPasswdOptions, args []string) error {
func runKeyPasswd(ctx context.Context, gopts global.Options, opts KeyPasswdOptions, args []string, term ui.Terminal) error {
if len(args) > 0 {
return fmt.Errorf("the key passwd command expects no arguments, only options - please see `restic help key passwd` for usage and flags")
}
ctx, repo, unlock, err := openWithExclusiveLock(ctx, gopts, false)
printer := ui.NewProgressPrinter(false, gopts.Verbosity, term)
ctx, repo, unlock, err := openWithExclusiveLock(ctx, gopts, false, printer)
if err != nil {
return err
}
defer unlock()
return changePassword(ctx, repo, gopts, opts)
return changePassword(ctx, repo, gopts, opts, printer)
}
func changePassword(ctx context.Context, repo *repository.Repository, gopts GlobalOptions, opts KeyPasswdOptions) error {
func changePassword(ctx context.Context, repo *repository.Repository, gopts global.Options, opts KeyPasswdOptions, printer progress.Printer) error {
pw, err := getNewPassword(ctx, gopts, opts.NewPasswordFile, opts.InsecureNoPassword)
if err != nil {
return err
@@ -69,7 +73,7 @@ func changePassword(ctx context.Context, repo *repository.Repository, gopts Glob
id, err := repository.AddKey(ctx, repo, pw, "", "", repo.Key())
if err != nil {
return errors.Fatalf("creating new key failed: %v\n", err)
return errors.Fatalf("creating new key failed: %v", err)
}
oldID := repo.KeyID()
@@ -83,7 +87,7 @@ func changePassword(ctx context.Context, repo *repository.Repository, gopts Glob
return err
}
Verbosef("saved new key as %s\n", id)
printer.P("saved new key as %s", id)
return nil
}

View File

@@ -5,12 +5,15 @@ import (
"fmt"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/ui/progress"
"github.com/spf13/cobra"
)
func newKeyRemoveCommand() *cobra.Command {
func newKeyRemoveCommand(globalOptions *global.Options) *cobra.Command {
cmd := &cobra.Command{
Use: "remove [ID]",
Short: "Remove key ID (password) from the repository.",
@@ -29,27 +32,28 @@ Exit status is 12 if the password is incorrect.
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runKeyRemove(cmd.Context(), globalOptions, args)
return runKeyRemove(cmd.Context(), *globalOptions, args, globalOptions.Term)
},
}
return cmd
}
func runKeyRemove(ctx context.Context, gopts GlobalOptions, args []string) error {
func runKeyRemove(ctx context.Context, gopts global.Options, args []string, term ui.Terminal) error {
if len(args) != 1 {
return fmt.Errorf("key remove expects one argument as the key id")
}
ctx, repo, unlock, err := openWithExclusiveLock(ctx, gopts, false)
printer := ui.NewProgressPrinter(gopts.JSON, gopts.Verbosity, term)
ctx, repo, unlock, err := openWithExclusiveLock(ctx, gopts, false, printer)
if err != nil {
return err
}
defer unlock()
return deleteKey(ctx, repo, args[0])
return deleteKey(ctx, repo, args[0], printer)
}
func deleteKey(ctx context.Context, repo *repository.Repository, idPrefix string) error {
func deleteKey(ctx context.Context, repo *repository.Repository, idPrefix string, printer progress.Printer) error {
id, err := restic.Find(ctx, repo, restic.KeyFile, idPrefix)
if err != nil {
return err
@@ -64,6 +68,6 @@ func deleteKey(ctx context.Context, repo *repository.Repository, idPrefix string
return err
}
Verbosef("removed key %v\n", id)
printer.P("removed key %v", id)
return nil
}

View File

@@ -5,13 +5,15 @@ import (
"strings"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/repository/index"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui"
"github.com/spf13/cobra"
)
func newListCommand() *cobra.Command {
func newListCommand(globalOptions *global.Options) *cobra.Command {
var listAllowedArgs = []string{"blobs", "packs", "index", "snapshots", "keys", "locks"}
var listAllowedArgsUseString = strings.Join(listAllowedArgs, "|")
@@ -33,7 +35,7 @@ Exit status is 12 if the password is incorrect.
DisableAutoGenTag: true,
GroupID: cmdGroupDefault,
RunE: func(cmd *cobra.Command, args []string) error {
return runList(cmd.Context(), globalOptions, args)
return runList(cmd.Context(), *globalOptions, args, globalOptions.Term)
},
ValidArgs: listAllowedArgs,
Args: cobra.MatchAll(cobra.ExactArgs(1), cobra.OnlyValidArgs),
@@ -41,12 +43,14 @@ Exit status is 12 if the password is incorrect.
return cmd
}
func runList(ctx context.Context, gopts GlobalOptions, args []string) error {
func runList(ctx context.Context, gopts global.Options, args []string, term ui.Terminal) error {
printer := ui.NewProgressPrinter(false, gopts.Verbosity, term)
if len(args) != 1 {
return errors.Fatal("type not specified")
}
ctx, repo, unlock, err := openWithReadLock(ctx, gopts, gopts.NoLock || args[0] == "locks")
ctx, repo, unlock, err := openWithReadLock(ctx, gopts, gopts.NoLock || args[0] == "locks", printer)
if err != nil {
return err
}
@@ -69,16 +73,20 @@ func runList(ctx context.Context, gopts GlobalOptions, args []string) error {
if err != nil {
return err
}
return idx.Each(ctx, func(blobs restic.PackedBlob) {
Printf("%v %v\n", blobs.Type, blobs.ID)
})
for blobs := range idx.Values() {
if ctx.Err() != nil {
return ctx.Err()
}
printer.S("%v %v", blobs.Type, blobs.ID)
}
return nil
})
default:
return errors.Fatal("invalid type")
}
return repo.List(ctx, t, func(id restic.ID, _ int64) error {
Printf("%s\n", id)
printer.S("%s", id)
return nil
})
}
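
The loop above replaces the callback-based idx.Each with for blobs := range idx.Values(). This hunk does not show whether Values() returns a Go 1.23 iter.Seq or a channel (both can be ranged over); as a general, self-contained illustration of the range-over-func shape assumed here, with a hypothetical iterator:

package main

import (
	"fmt"
	"iter"
)

// evens yields even numbers below max and stops early when the consumer
// breaks out of the range loop (yield returns false).
func evens(max int) iter.Seq[int] {
	return func(yield func(int) bool) {
		for i := 0; i < max; i += 2 {
			if !yield(i) {
				return
			}
		}
	}
}

func main() {
	for v := range evens(10) {
		fmt.Println(v) // 0 2 4 6 8
	}
}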

View File

@@ -4,15 +4,19 @@ import (
"bufio"
"context"
"io"
"path/filepath"
"strings"
"testing"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/restic"
rtest "github.com/restic/restic/internal/test"
"github.com/restic/restic/internal/ui"
)
func testRunList(t testing.TB, tpe string, opts GlobalOptions) restic.IDs {
buf, err := withCaptureStdout(func() error {
return runList(context.TODO(), opts, []string{tpe})
func testRunList(t testing.TB, gopts global.Options, tpe string) restic.IDs {
buf, err := withCaptureStdout(t, gopts, func(ctx context.Context, gopts global.Options) error {
return runList(ctx, gopts, []string{tpe}, gopts.Term)
})
rtest.OK(t, err)
return parseIDsFromReader(t, buf)
@@ -24,21 +28,77 @@ func parseIDsFromReader(t testing.TB, rd io.Reader) restic.IDs {
sc := bufio.NewScanner(rd)
for sc.Scan() {
id, err := restic.ParseID(sc.Text())
if err != nil {
t.Logf("parse id %v: %v", sc.Text(), err)
continue
if len(sc.Text()) == 64 {
id, err := restic.ParseID(sc.Text())
if err != nil {
t.Logf("parse id %v: %v", sc.Text(), err)
continue
}
IDs = append(IDs, id)
} else {
// 'list blobs' is different because it lists the blobs together with the blob type
// e.g. "tree ac08ce34ba4f8123618661bef2425f7028ffb9ac740578a3ee88684d2523fee8"
parts := strings.Split(sc.Text(), " ")
id, err := restic.ParseID(parts[len(parts)-1])
if err != nil {
t.Logf("parse id %v: %v", sc.Text(), err)
continue
}
IDs = append(IDs, id)
}
IDs = append(IDs, id)
}
return IDs
}
func testListSnapshots(t testing.TB, opts GlobalOptions, expected int) restic.IDs {
func testListSnapshots(t testing.TB, gopts global.Options, expected int) restic.IDs {
t.Helper()
snapshotIDs := testRunList(t, "snapshots", opts)
snapshotIDs := testRunList(t, gopts, "snapshots")
rtest.Assert(t, len(snapshotIDs) == expected, "expected %v snapshot, got %v", expected, snapshotIDs)
return snapshotIDs
}
// extract blob set from repository index
func testListBlobs(t testing.TB, gopts global.Options) (blobSetFromIndex restic.IDSet) {
err := withTermStatus(t, gopts, func(ctx context.Context, gopts global.Options) error {
printer := ui.NewProgressPrinter(gopts.JSON, gopts.Verbosity, gopts.Term)
_, repo, unlock, err := openWithReadLock(ctx, gopts, false, printer)
rtest.OK(t, err)
defer unlock()
// make sure the index is loaded
rtest.OK(t, repo.LoadIndex(ctx, nil))
// get blobs from index
blobSetFromIndex = restic.NewIDSet()
rtest.OK(t, repo.ListBlobs(ctx, func(blob restic.PackedBlob) {
blobSetFromIndex.Insert(blob.ID)
}))
return nil
})
rtest.OK(t, err)
return blobSetFromIndex
}
func TestListBlobs(t *testing.T) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
testSetupBackupData(t, env)
opts := BackupOptions{}
// first backup
testRunBackup(t, "", []string{filepath.Join(env.testdata, "0", "0", "9")}, opts, env.gopts)
testListSnapshots(t, env.gopts, 1)
// run the `list blobs` command
resticIDs := testRunList(t, env.gopts, "blobs")
// convert to set
testIDSet := restic.NewIDSet(resticIDs...)
blobSetFromIndex := testListBlobs(t, env.gopts)
rtest.Assert(t, blobSetFromIndex.Equals(testIDSet), "the sets of restic.IDs should be equal")
}
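
To show the two line formats the updated parseIDsFromReader accepts — bare 64-character IDs and the "type ID" pairs printed by restic list blobs — a hypothetical test reusing the sample ID from the comment above:

func TestParseIDsFromReaderFormats(t *testing.T) {
	// One bare ID and one "type ID" pair, as emitted by `list blobs`.
	input := "ac08ce34ba4f8123618661bef2425f7028ffb9ac740578a3ee88684d2523fee8\n" +
		"tree ac08ce34ba4f8123618661bef2425f7028ffb9ac740578a3ee88684d2523fee8\n"

	ids := parseIDsFromReader(t, strings.NewReader(input))
	rtest.Equals(t, 2, len(ids))
}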

View File

@@ -15,13 +15,16 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"github.com/restic/restic/internal/data"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/fs"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/walker"
)
func newLsCommand() *cobra.Command {
func newLsCommand(globalOptions *global.Options) *cobra.Command {
var opts LsOptions
cmd := &cobra.Command{
@@ -59,7 +62,8 @@ Exit status is 12 if the password is incorrect.
DisableAutoGenTag: true,
GroupID: cmdGroupDefault,
RunE: func(cmd *cobra.Command, args []string) error {
return runLs(cmd.Context(), opts, globalOptions, args)
finalizeSnapshotFilter(&opts.SnapshotFilter)
return runLs(cmd.Context(), opts, *globalOptions, args, globalOptions.Term)
},
}
opts.AddFlags(cmd.Flags())
@@ -69,7 +73,7 @@ Exit status is 12 if the password is incorrect.
// LsOptions collects all options for the ls command.
type LsOptions struct {
ListLong bool
restic.SnapshotFilter
data.SnapshotFilter
Recursive bool
HumanReadable bool
Ncdu bool
@@ -88,8 +92,8 @@ func (opts *LsOptions) AddFlags(f *pflag.FlagSet) {
}
type lsPrinter interface {
Snapshot(sn *restic.Snapshot) error
Node(path string, node *restic.Node, isPrefixDirectory bool) error
Snapshot(sn *data.Snapshot) error
Node(path string, node *data.Node, isPrefixDirectory bool) error
LeaveDir(path string) error
Close() error
}
@@ -98,9 +102,9 @@ type jsonLsPrinter struct {
enc *json.Encoder
}
func (p *jsonLsPrinter) Snapshot(sn *restic.Snapshot) error {
func (p *jsonLsPrinter) Snapshot(sn *data.Snapshot) error {
type lsSnapshot struct {
*restic.Snapshot
*data.Snapshot
ID *restic.ID `json:"id"`
ShortID string `json:"short_id"` // deprecated
MessageType string `json:"message_type"` // "snapshot"
@@ -117,14 +121,14 @@ func (p *jsonLsPrinter) Snapshot(sn *restic.Snapshot) error {
}
// Node formats node in our custom JSON format, followed by a newline.
func (p *jsonLsPrinter) Node(path string, node *restic.Node, isPrefixDirectory bool) error {
func (p *jsonLsPrinter) Node(path string, node *data.Node, isPrefixDirectory bool) error {
if isPrefixDirectory {
return nil
}
return lsNodeJSON(p.enc, path, node)
}
func lsNodeJSON(enc *json.Encoder, path string, node *restic.Node) error {
func lsNodeJSON(enc *json.Encoder, path string, node *data.Node) error {
n := &struct {
Name string `json:"name"`
Type string `json:"type"`
@@ -160,7 +164,7 @@ func lsNodeJSON(enc *json.Encoder, path string, node *restic.Node) error {
}
// Always print size for regular files, even when empty,
// but never for other types.
if node.Type == restic.NodeTypeFile {
if node.Type == data.NodeTypeFile {
n.Size = &n.size
}
@@ -178,7 +182,7 @@ type ncduLsPrinter struct {
// Snapshot prints a restic snapshot in Ncdu save format.
// It opens the JSON list. Nodes are added with lsNodeNcdu and the list is closed by lsCloseNcdu.
// Format documentation: https://dev.yorhel.nl/ncdu/jsonfmt
func (p *ncduLsPrinter) Snapshot(sn *restic.Snapshot) error {
func (p *ncduLsPrinter) Snapshot(sn *data.Snapshot) error {
const NcduMajorVer = 1
const NcduMinorVer = 2
@@ -191,7 +195,7 @@ func (p *ncduLsPrinter) Snapshot(sn *restic.Snapshot) error {
return err
}
func lsNcduNode(_ string, node *restic.Node) ([]byte, error) {
func lsNcduNode(_ string, node *data.Node) ([]byte, error) {
type NcduNode struct {
Name string `json:"name"`
Asize uint64 `json:"asize"`
@@ -216,7 +220,7 @@ func lsNcduNode(_ string, node *restic.Node) ([]byte, error) {
Dev: node.DeviceID,
Ino: node.Inode,
NLink: node.Links,
NotReg: node.Type != restic.NodeTypeDir && node.Type != restic.NodeTypeFile,
NotReg: node.Type != data.NodeTypeDir && node.Type != data.NodeTypeFile,
UID: node.UID,
GID: node.GID,
Mode: uint16(node.Mode & os.ModePerm),
@@ -240,13 +244,13 @@ func lsNcduNode(_ string, node *restic.Node) ([]byte, error) {
return json.Marshal(outNode)
}
func (p *ncduLsPrinter) Node(path string, node *restic.Node, _ bool) error {
func (p *ncduLsPrinter) Node(path string, node *data.Node, _ bool) error {
out, err := lsNcduNode(path, node)
if err != nil {
return err
}
if node.Type == restic.NodeTypeDir {
if node.Type == data.NodeTypeDir {
_, err = fmt.Fprintf(p.out, ",\n%s[\n%s%s", strings.Repeat(" ", p.depth), strings.Repeat(" ", p.depth+1), string(out))
p.depth++
} else {
@@ -270,15 +274,19 @@ type textLsPrinter struct {
dirs []string
ListLong bool
HumanReadable bool
termPrinter interface {
P(msg string, args ...interface{})
S(msg string, args ...interface{})
}
}
func (p *textLsPrinter) Snapshot(sn *restic.Snapshot) error {
Verbosef("%v filtered by %v:\n", sn, p.dirs)
func (p *textLsPrinter) Snapshot(sn *data.Snapshot) error {
p.termPrinter.P("%v filtered by %v:", sn, p.dirs)
return nil
}
func (p *textLsPrinter) Node(path string, node *restic.Node, isPrefixDirectory bool) error {
func (p *textLsPrinter) Node(path string, node *data.Node, isPrefixDirectory bool) error {
if !isPrefixDirectory {
Printf("%s\n", formatNode(path, node, p.ListLong, p.HumanReadable))
p.termPrinter.S("%s", formatNode(path, node, p.ListLong, p.HumanReadable))
}
return nil
}
@@ -293,10 +301,12 @@ func (p *textLsPrinter) Close() error {
// for ls -l output sorting
type toSortOutput struct {
nodepath string
node *restic.Node
node *data.Node
}
func runLs(ctx context.Context, opts LsOptions, gopts GlobalOptions, args []string) error {
func runLs(ctx context.Context, opts LsOptions, gopts global.Options, args []string, term ui.Terminal) error {
termPrinter := ui.NewProgressPrinter(gopts.JSON, gopts.Verbosity, term)
if len(args) == 0 {
return errors.Fatal("no snapshot ID specified, specify snapshot ID or use special ID 'latest'")
}
@@ -355,7 +365,7 @@ func runLs(ctx context.Context, opts LsOptions, gopts GlobalOptions, args []stri
return false
}
ctx, repo, unlock, err := openWithReadLock(ctx, gopts, gopts.NoLock)
ctx, repo, unlock, err := openWithReadLock(ctx, gopts, gopts.NoLock, termPrinter)
if err != nil {
return err
}
@@ -366,8 +376,7 @@ func runLs(ctx context.Context, opts LsOptions, gopts GlobalOptions, args []stri
return err
}
bar := newIndexProgress(gopts.Quiet, gopts.JSON)
if err = repo.LoadIndex(ctx, bar); err != nil {
if err = repo.LoadIndex(ctx, termPrinter); err != nil {
return err
}
@@ -375,17 +384,18 @@ func runLs(ctx context.Context, opts LsOptions, gopts GlobalOptions, args []stri
if gopts.JSON {
printer = &jsonLsPrinter{
enc: json.NewEncoder(globalOptions.stdout),
enc: json.NewEncoder(gopts.Term.OutputWriter()),
}
} else if opts.Ncdu {
printer = &ncduLsPrinter{
out: globalOptions.stdout,
out: gopts.Term.OutputWriter(),
}
} else {
printer = &textLsPrinter{
dirs: dirs,
ListLong: opts.ListLong,
HumanReadable: opts.HumanReadable,
termPrinter: termPrinter,
}
}
if opts.Sort != SortModeName || opts.Reverse {
@@ -396,16 +406,12 @@ func runLs(ctx context.Context, opts LsOptions, gopts GlobalOptions, args []stri
}
}
sn, subfolder, err := (&restic.SnapshotFilter{
Hosts: opts.Hosts,
Paths: opts.Paths,
Tags: opts.Tags,
}).FindLatest(ctx, snapshotLister, repo, args[0])
sn, subfolder, err := opts.SnapshotFilter.FindLatest(ctx, snapshotLister, repo, args[0])
if err != nil {
return err
}
sn.Tree, err = restic.FindTreeDirectory(ctx, repo, sn.Tree, subfolder)
sn.Tree, err = data.FindTreeDirectory(ctx, repo, sn.Tree, subfolder)
if err != nil {
return err
}
@@ -414,7 +420,7 @@ func runLs(ctx context.Context, opts LsOptions, gopts GlobalOptions, args []stri
return err
}
processNode := func(_ restic.ID, nodepath string, node *restic.Node, err error) error {
processNode := func(_ restic.ID, nodepath string, node *data.Node, err error) error {
if err != nil {
return err
}
@@ -449,7 +455,7 @@ func runLs(ctx context.Context, opts LsOptions, gopts GlobalOptions, args []stri
// otherwise, signal the walker to not walk recursively into any
// subdirs
if node.Type == restic.NodeTypeDir {
if node.Type == data.NodeTypeDir {
// immediately generate leaveDir if the directory is skipped
if printedDir {
if err := printer.LeaveDir(nodepath); err != nil {
@@ -486,10 +492,10 @@ type sortedPrinter struct {
reverse bool
}
func (p *sortedPrinter) Snapshot(sn *restic.Snapshot) error {
func (p *sortedPrinter) Snapshot(sn *data.Snapshot) error {
return p.printer.Snapshot(sn)
}
func (p *sortedPrinter) Node(path string, node *restic.Node, isPrefixDirectory bool) error {
func (p *sortedPrinter) Node(path string, node *data.Node, isPrefixDirectory bool) error {
if !isPrefixDirectory {
p.collector = append(p.collector, toSortOutput{path, node})
}
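
As a compact illustration of the lsPrinter interface after the restic.Node → data.Node rename, a hypothetical implementation that only counts what it is given (not part of the change):

type countingLsPrinter struct {
	snapshots, nodes int
}

func (p *countingLsPrinter) Snapshot(_ *data.Snapshot) error { p.snapshots++; return nil }

func (p *countingLsPrinter) Node(_ string, _ *data.Node, isPrefixDirectory bool) error {
	// Prefix directories are the synthetic parents of the requested paths and
	// are skipped, mirroring the JSON and text printers above.
	if !isPrefixDirectory {
		p.nodes++
	}
	return nil
}

func (p *countingLsPrinter) LeaveDir(_ string) error { return nil }

func (p *countingLsPrinter) Close() error { return nil }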

View File

@@ -8,20 +8,22 @@ import (
"strings"
"testing"
"github.com/restic/restic/internal/data"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/restic"
rtest "github.com/restic/restic/internal/test"
)
func testRunLsWithOpts(t testing.TB, gopts GlobalOptions, opts LsOptions, args []string) []byte {
buf, err := withCaptureStdout(func() error {
func testRunLsWithOpts(t testing.TB, gopts global.Options, opts LsOptions, args []string) []byte {
buf, err := withCaptureStdout(t, gopts, func(ctx context.Context, gopts global.Options) error {
gopts.Quiet = true
return runLs(context.TODO(), opts, gopts, args)
return runLs(context.TODO(), opts, gopts, args, gopts.Term)
})
rtest.OK(t, err)
return buf.Bytes()
}
func testRunLs(t testing.TB, gopts GlobalOptions, snapshotID string) []string {
func testRunLs(t testing.TB, gopts global.Options, snapshotID string) []string {
out := testRunLsWithOpts(t, gopts, LsOptions{}, []string{snapshotID})
return strings.Split(string(out), "\n")
}
@@ -129,7 +131,7 @@ func TestRunLsJson(t *testing.T) {
// partial copy of snapshot structure from cmd_ls
type lsSnapshot struct {
*restic.Snapshot
*data.Snapshot
ID *restic.ID `json:"id"`
ShortID string `json:"short_id"` // deprecated
MessageType string `json:"message_type"` // "snapshot"

View File

@@ -7,13 +7,13 @@ import (
"testing"
"time"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/data"
rtest "github.com/restic/restic/internal/test"
)
type lsTestNode struct {
path string
restic.Node
data.Node
}
var lsTestNodes = []lsTestNode{
@@ -21,9 +21,9 @@ var lsTestNodes = []lsTestNode{
// Permissions, by convention is "-" per mode bit
{
path: "/bar/baz",
Node: restic.Node{
Node: data.Node{
Name: "baz",
Type: restic.NodeTypeFile,
Type: data.NodeTypeFile,
Size: 12345,
UID: 10000000,
GID: 20000000,
@@ -37,9 +37,9 @@ var lsTestNodes = []lsTestNode{
// Even empty files get an explicit size.
{
path: "/foo/empty",
Node: restic.Node{
Node: data.Node{
Name: "empty",
Type: restic.NodeTypeFile,
Type: data.NodeTypeFile,
Size: 0,
UID: 1001,
GID: 1001,
@@ -54,9 +54,9 @@ var lsTestNodes = []lsTestNode{
// Mode is printed in decimal, including the type bits.
{
path: "/foo/link",
Node: restic.Node{
Node: data.Node{
Name: "link",
Type: restic.NodeTypeSymlink,
Type: data.NodeTypeSymlink,
Mode: os.ModeSymlink | 0777,
LinkTarget: "not printed",
},
@@ -64,9 +64,9 @@ var lsTestNodes = []lsTestNode{
{
path: "/some/directory",
Node: restic.Node{
Node: data.Node{
Name: "directory",
Type: restic.NodeTypeDir,
Type: data.NodeTypeDir,
Mode: os.ModeDir | 0755,
ModTime: time.Date(2020, 1, 2, 3, 4, 5, 0, time.UTC),
AccessTime: time.Date(2021, 2, 3, 4, 5, 6, 7, time.UTC),
@@ -77,9 +77,9 @@ var lsTestNodes = []lsTestNode{
// Test encoding of setuid/setgid/sticky bit
{
path: "/some/sticky",
Node: restic.Node{
Node: data.Node{
Name: "sticky",
Type: restic.NodeTypeDir,
Type: data.NodeTypeDir,
Mode: os.ModeDir | 0755 | os.ModeSetuid | os.ModeSetgid | os.ModeSticky,
},
},
@@ -134,24 +134,24 @@ func TestLsNcdu(t *testing.T) {
}
modTime := time.Date(2020, 1, 2, 3, 4, 5, 0, time.UTC)
rtest.OK(t, printer.Snapshot(&restic.Snapshot{
rtest.OK(t, printer.Snapshot(&data.Snapshot{
Hostname: "host",
Paths: []string{"/example"},
}))
rtest.OK(t, printer.Node("/directory", &restic.Node{
Type: restic.NodeTypeDir,
rtest.OK(t, printer.Node("/directory", &data.Node{
Type: data.NodeTypeDir,
Name: "directory",
ModTime: modTime,
}, false))
rtest.OK(t, printer.Node("/directory/data", &restic.Node{
Type: restic.NodeTypeFile,
rtest.OK(t, printer.Node("/directory/data", &data.Node{
Type: data.NodeTypeFile,
Name: "data",
Size: 42,
ModTime: modTime,
}, false))
rtest.OK(t, printer.LeaveDir("/directory"))
rtest.OK(t, printer.Node("/file", &restic.Node{
Type: restic.NodeTypeFile,
rtest.OK(t, printer.Node("/file", &data.Node{
Type: data.NodeTypeFile,
Name: "file",
Size: 12345,
ModTime: modTime,

View File

@@ -3,16 +3,17 @@ package main
import (
"context"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/migrations"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/ui/progress"
"github.com/restic/restic/internal/ui/termstatus"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
func newMigrateCommand() *cobra.Command {
func newMigrateCommand(globalOptions *global.Options) *cobra.Command {
var opts MigrateOptions
cmd := &cobra.Command{
@@ -35,9 +36,7 @@ Exit status is 12 if the password is incorrect.
DisableAutoGenTag: true,
GroupID: cmdGroupDefault,
RunE: func(cmd *cobra.Command, args []string) error {
term, cancel := setupTermstatus()
defer cancel()
return runMigrate(cmd.Context(), opts, globalOptions, args, term)
return runMigrate(cmd.Context(), opts, *globalOptions, args, globalOptions.Term)
},
}
@@ -77,7 +76,7 @@ func checkMigrations(ctx context.Context, repo restic.Repository, printer progre
return nil
}
func applyMigrations(ctx context.Context, opts MigrateOptions, gopts GlobalOptions, repo restic.Repository, args []string, term *termstatus.Terminal, printer progress.Printer) error {
func applyMigrations(ctx context.Context, opts MigrateOptions, gopts global.Options, repo restic.Repository, args []string, term ui.Terminal, printer progress.Printer) error {
var firsterr error
for _, name := range args {
found := false
@@ -135,10 +134,10 @@ func applyMigrations(ctx context.Context, opts MigrateOptions, gopts GlobalOptio
return firsterr
}
func runMigrate(ctx context.Context, opts MigrateOptions, gopts GlobalOptions, args []string, term *termstatus.Terminal) error {
printer := newTerminalProgressPrinter(gopts.verbosity, term)
func runMigrate(ctx context.Context, opts MigrateOptions, gopts global.Options, args []string, term ui.Terminal) error {
printer := ui.NewProgressPrinter(false, gopts.Verbosity, term)
ctx, repo, unlock, err := openWithExclusiveLock(ctx, gopts, false)
ctx, repo, unlock, err := openWithExclusiveLock(ctx, gopts, false, printer)
if err != nil {
return err
}

View File

@@ -1,5 +1,4 @@
//go:build darwin || freebsd || linux
// +build darwin freebsd linux
package main
@@ -12,10 +11,13 @@ import (
"github.com/spf13/cobra"
"github.com/spf13/pflag"
"golang.org/x/sys/unix"
"github.com/restic/restic/internal/data"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/fuse"
@@ -23,19 +25,19 @@ import (
"github.com/anacrolix/fuse/fs"
)
func registerMountCommand(cmdRoot *cobra.Command) {
cmdRoot.AddCommand(newMountCommand())
func registerMountCommand(cmdRoot *cobra.Command, globalOptions *global.Options) {
cmdRoot.AddCommand(newMountCommand(globalOptions))
}
func newMountCommand() *cobra.Command {
func newMountCommand(globalOptions *global.Options) *cobra.Command {
var opts MountOptions
cmd := &cobra.Command{
Use: "mount [flags] mountpoint",
Short: "Mount the repository",
Long: `
The "mount" command mounts the repository via fuse to a directory. This is a
read-only mount.
The "mount" command mounts the repository via fuse over a writeable directory.
The repository will be mounted read-only.
Snapshot Directories
====================
@@ -81,7 +83,8 @@ Exit status is 12 if the password is incorrect.
DisableAutoGenTag: true,
GroupID: cmdGroupDefault,
RunE: func(cmd *cobra.Command, args []string) error {
return runMount(cmd.Context(), opts, globalOptions, args)
finalizeSnapshotFilter(&opts.SnapshotFilter)
return runMount(cmd.Context(), opts, *globalOptions, args, globalOptions.Term)
},
}
@@ -94,7 +97,7 @@ type MountOptions struct {
OwnerRoot bool
AllowOther bool
NoDefaultPermissions bool
restic.SnapshotFilter
data.SnapshotFilter
TimeTemplate string
PathTemplates []string
}
@@ -112,7 +115,9 @@ func (opts *MountOptions) AddFlags(f *pflag.FlagSet) {
_ = f.MarkDeprecated("snapshot-template", "use --time-template")
}
func runMount(ctx context.Context, opts MountOptions, gopts GlobalOptions, args []string) error {
func runMount(ctx context.Context, opts MountOptions, gopts global.Options, args []string, term ui.Terminal) error {
printer := ui.NewProgressPrinter(false, gopts.Verbosity, term)
if opts.TimeTemplate == "" {
return errors.Fatal("time template string cannot be empty")
}
@@ -129,22 +134,31 @@ func runMount(ctx context.Context, opts MountOptions, gopts GlobalOptions, args
// Check the existence of the mount point at the earliest stage to
// prevent unnecessary computations while opening the repository.
if _, err := os.Stat(mountpoint); errors.Is(err, os.ErrNotExist) {
Verbosef("Mountpoint %s doesn't exist\n", mountpoint)
return err
stat, err := os.Stat(mountpoint)
if errors.Is(err, os.ErrNotExist) {
printer.P("Mountpoint %s doesn't exist", mountpoint)
return errors.Fatal("invalid mountpoint")
} else if !stat.IsDir() {
printer.P("Mountpoint %s is not a directory", mountpoint)
return errors.Fatal("invalid mountpoint")
}
err = unix.Access(mountpoint, unix.W_OK|unix.X_OK)
if err != nil {
printer.P("Mountpoint %s is not writeable or not excutable", mountpoint)
return errors.Fatal("inaccessible mountpoint")
}
debug.Log("start mount")
defer debug.Log("finish mount")
ctx, repo, unlock, err := openWithReadLock(ctx, gopts, gopts.NoLock)
ctx, repo, unlock, err := openWithReadLock(ctx, gopts, gopts.NoLock, printer)
if err != nil {
return err
}
defer unlock()
bar := newIndexProgress(gopts.Quiet, gopts.JSON)
err = repo.LoadIndex(ctx, bar)
err = repo.LoadIndex(ctx, printer)
if err != nil {
return err
}
@@ -183,9 +197,9 @@ func runMount(ctx context.Context, opts MountOptions, gopts GlobalOptions, args
}
root := fuse.NewRoot(repo, cfg)
Printf("Now serving the repository at %s\n", mountpoint)
Printf("Use another terminal or tool to browse the contents of this folder.\n")
Printf("When finished, quit with Ctrl-c here or umount the mountpoint.\n")
printer.S("Now serving the repository at %s", mountpoint)
printer.S("Use another terminal or tool to browse the contents of this folder.")
printer.S("When finished, quit with Ctrl-c here or umount the mountpoint.")
debug.Log("serving mount at %v", mountpoint)
@@ -201,7 +215,7 @@ func runMount(ctx context.Context, opts MountOptions, gopts GlobalOptions, args
debug.Log("running umount cleanup handler for mount at %v", mountpoint)
err := systemFuse.Unmount(mountpoint)
if err != nil {
Warnf("unable to umount (maybe already umounted or still in use?): %v\n", err)
printer.E("unable to umount (maybe already umounted or still in use?): %v", err)
}
return ErrOK
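
The mountpoint checks added above can be read as one precondition helper: the target must exist, be a directory, and be writable and executable for the mounting user. The sketch below is hypothetical (the change keeps the checks inline in runMount) and additionally returns unexpected Stat errors, which the hunk does not show:

func checkMountpoint(mountpoint string) error {
	stat, err := os.Stat(mountpoint)
	if errors.Is(err, os.ErrNotExist) {
		return errors.Fatal("invalid mountpoint: does not exist")
	} else if err != nil {
		return err
	} else if !stat.IsDir() {
		return errors.Fatal("invalid mountpoint: not a directory")
	}
	// FUSE needs the target to be enterable (x) and writable (w) for the
	// mounting user; unix.Access verifies both in a single call.
	if err := unix.Access(mountpoint, unix.W_OK|unix.X_OK); err != nil {
		return errors.Fatal("inaccessible mountpoint: not writable or not executable")
	}
	return nil
}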

View File

@@ -1,10 +1,12 @@
//go:build !darwin && !freebsd && !linux
// +build !darwin,!freebsd,!linux
package main
import "github.com/spf13/cobra"
import (
"github.com/restic/restic/internal/global"
"github.com/spf13/cobra"
)
func registerMountCommand(_ *cobra.Command) {
func registerMountCommand(_ *cobra.Command, _ *global.Options) {
// Mount command not supported on these platforms
}

View File

@@ -1,5 +1,4 @@
//go:build darwin || freebsd || linux
// +build darwin freebsd linux
package main
@@ -13,9 +12,12 @@ import (
"time"
systemFuse "github.com/anacrolix/fuse"
"github.com/restic/restic/internal/data"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/restic"
rtest "github.com/restic/restic/internal/test"
"github.com/restic/restic/internal/ui"
)
const (
@@ -56,12 +58,14 @@ func waitForMount(t testing.TB, dir string) {
t.Errorf("subdir %q of dir %s never appeared", mountTestSubdir, dir)
}
func testRunMount(t testing.TB, gopts GlobalOptions, dir string, wg *sync.WaitGroup) {
func testRunMount(t testing.TB, gopts global.Options, dir string, wg *sync.WaitGroup) {
defer wg.Done()
opts := MountOptions{
TimeTemplate: time.RFC3339,
}
rtest.OK(t, runMount(context.TODO(), opts, gopts, []string{dir}))
rtest.OK(t, withTermStatus(t, gopts, func(ctx context.Context, gopts global.Options) error {
return runMount(context.TODO(), opts, gopts, []string{dir}, gopts.Term)
}))
}
func testRunUmount(t testing.TB, dir string) {
@@ -87,7 +91,7 @@ func listSnapshots(t testing.TB, dir string) []string {
return names
}
func checkSnapshots(t testing.TB, gopts GlobalOptions, mountpoint string, snapshotIDs restic.IDs, expectedSnapshotsInFuseDir int) {
func checkSnapshots(t testing.TB, gopts global.Options, mountpoint string, snapshotIDs restic.IDs, expectedSnapshotsInFuseDir int) {
t.Logf("checking for %d snapshots: %v", len(snapshotIDs), snapshotIDs)
var wg sync.WaitGroup
@@ -125,34 +129,41 @@ func checkSnapshots(t testing.TB, gopts GlobalOptions, mountpoint string, snapsh
}
}
_, repo, unlock, err := openWithReadLock(context.TODO(), gopts, false)
rtest.OK(t, err)
defer unlock()
for _, id := range snapshotIDs {
snapshot, err := restic.LoadSnapshot(context.TODO(), repo, id)
rtest.OK(t, err)
ts := snapshot.Time.Format(time.RFC3339)
present, ok := namesMap[ts]
if !ok {
t.Errorf("Snapshot %v (%q) isn't present in fuse dir", id.Str(), ts)
err := withTermStatus(t, gopts, func(ctx context.Context, gopts global.Options) error {
printer := ui.NewProgressPrinter(gopts.JSON, gopts.Verbosity, gopts.Term)
_, repo, unlock, err := openWithReadLock(ctx, gopts, false, printer)
if err != nil {
return err
}
defer unlock()
for i := 1; present; i++ {
ts = fmt.Sprintf("%s-%d", snapshot.Time.Format(time.RFC3339), i)
present, ok = namesMap[ts]
for _, id := range snapshotIDs {
snapshot, err := data.LoadSnapshot(ctx, repo, id)
rtest.OK(t, err)
ts := snapshot.Time.Format(time.RFC3339)
present, ok := namesMap[ts]
if !ok {
t.Errorf("Snapshot %v (%q) isn't present in fuse dir", id.Str(), ts)
}
if !present {
break
}
}
for i := 1; present; i++ {
ts = fmt.Sprintf("%s-%d", snapshot.Time.Format(time.RFC3339), i)
present, ok = namesMap[ts]
if !ok {
t.Errorf("Snapshot %v (%q) isn't present in fuse dir", id.Str(), ts)
}
namesMap[ts] = true
}
if !present {
break
}
}
namesMap[ts] = true
}
return nil
})
rtest.OK(t, err)
for name, present := range namesMap {
rtest.Assert(t, present, "Directory %s is present in fuse dir but is not a snapshot", name)
@@ -166,7 +177,7 @@ func TestMount(t *testing.T) {
env, cleanup := withTestEnvironment(t)
// must list snapshots more than once
env.gopts.backendTestHook = nil
env.gopts.BackendTestHook = nil
defer cleanup()
testRunInit(t, env.gopts)
@@ -177,7 +188,7 @@ func TestMount(t *testing.T) {
// first backup
testRunBackup(t, "", []string{env.testdata}, BackupOptions{}, env.gopts)
snapshotIDs := testRunList(t, "snapshots", env.gopts)
snapshotIDs := testRunList(t, env.gopts, "snapshots")
rtest.Assert(t, len(snapshotIDs) == 1,
"expected one snapshot, got %v", snapshotIDs)
@@ -185,7 +196,7 @@ func TestMount(t *testing.T) {
// second backup, implicit incremental
testRunBackup(t, "", []string{env.testdata}, BackupOptions{}, env.gopts)
snapshotIDs = testRunList(t, "snapshots", env.gopts)
snapshotIDs = testRunList(t, env.gopts, "snapshots")
rtest.Assert(t, len(snapshotIDs) == 2,
"expected two snapshots, got %v", snapshotIDs)
@@ -194,7 +205,7 @@ func TestMount(t *testing.T) {
// third backup, explicit incremental
bopts := BackupOptions{Parent: snapshotIDs[0].String()}
testRunBackup(t, "", []string{env.testdata}, bopts, env.gopts)
snapshotIDs = testRunList(t, "snapshots", env.gopts)
snapshotIDs = testRunList(t, env.gopts, "snapshots")
rtest.Assert(t, len(snapshotIDs) == 3,
"expected three snapshots, got %v", snapshotIDs)
@@ -213,7 +224,7 @@ func TestMountSameTimestamps(t *testing.T) {
env, cleanup := withTestEnvironment(t)
// must list snapshots more than once
env.gopts.backendTestHook = nil
env.gopts.BackendTestHook = nil
defer cleanup()
rtest.SetupTarTestFixture(t, env.base, filepath.Join("testdata", "repo-same-timestamps.tar.gz"))

View File

@@ -3,12 +3,13 @@ package main
import (
"fmt"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/options"
"github.com/spf13/cobra"
)
func newOptionsCommand() *cobra.Command {
func newOptionsCommand(globalOptions *global.Options) *cobra.Command {
cmd := &cobra.Command{
Use: "options",
Short: "Print list of extended options",
@@ -24,7 +25,7 @@ Exit status is 1 if there was any error.
GroupID: cmdGroupAdvanced,
DisableAutoGenTag: true,
Run: func(_ *cobra.Command, _ []string) {
fmt.Printf("All Extended Options:\n")
globalOptions.Term.Print("All Extended Options:")
var maxLen int
for _, opt := range options.List() {
if l := len(opt.Namespace + "." + opt.Name); l > maxLen {
@@ -32,7 +33,7 @@ Exit status is 1 if there was any error.
}
}
for _, opt := range options.List() {
fmt.Printf(" %*s %s\n", -maxLen, opt.Namespace+"."+opt.Name, opt.Text)
globalOptions.Term.Print(fmt.Sprintf(" %*s %s", -maxLen, opt.Namespace+"."+opt.Name, opt.Text))
}
},
}
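
The listing above left-aligns option names using a negative field width. A small standalone illustration of that fmt verb (the option names here are placeholders):

package main

import "fmt"

func main() {
	maxLen := 20
	// A negative width with %*s left-justifies the name in a maxLen-wide
	// field, so the descriptions line up in one column.
	fmt.Printf("%*s %s\n", -maxLen, "s3.region", "region name")
	fmt.Printf("%*s %s\n", -maxLen, "sftp.command", "command to launch")
}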

View File

@@ -7,19 +7,20 @@ import (
"strconv"
"strings"
"github.com/restic/restic/internal/data"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/ui/progress"
"github.com/restic/restic/internal/ui/termstatus"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
func newPruneCommand() *cobra.Command {
func newPruneCommand(globalOptions *global.Options) *cobra.Command {
var opts PruneOptions
cmd := &cobra.Command{
@@ -41,9 +42,7 @@ Exit status is 12 if the password is incorrect.
GroupID: cmdGroupDefault,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, _ []string) error {
term, cancel := setupTermstatus()
defer cancel()
return runPrune(cmd.Context(), opts, globalOptions, term)
return runPrune(cmd.Context(), opts, *globalOptions, globalOptions.Term)
},
}
@@ -155,7 +154,7 @@ func verifyPruneOptions(opts *PruneOptions) error {
return nil
}
func runPrune(ctx context.Context, opts PruneOptions, gopts GlobalOptions, term *termstatus.Terminal) error {
func runPrune(ctx context.Context, opts PruneOptions, gopts global.Options, term ui.Terminal) error {
err := verifyPruneOptions(&opts)
if err != nil {
return err
@@ -169,7 +168,8 @@ func runPrune(ctx context.Context, opts PruneOptions, gopts GlobalOptions, term
return errors.Fatal("--no-lock is only applicable in combination with --dry-run for prune command")
}
ctx, repo, unlock, err := openWithExclusiveLock(ctx, gopts, opts.DryRun && gopts.NoLock)
printer := ui.NewProgressPrinter(false, gopts.Verbosity, term)
ctx, repo, unlock, err := openWithExclusiveLock(ctx, gopts, opts.DryRun && gopts.NoLock, printer)
if err != nil {
return err
}
@@ -183,20 +183,16 @@ func runPrune(ctx context.Context, opts PruneOptions, gopts GlobalOptions, term
opts.unsafeRecovery = true
}
return runPruneWithRepo(ctx, opts, gopts, repo, restic.NewIDSet(), term)
return runPruneWithRepo(ctx, opts, repo, restic.NewIDSet(), printer)
}
func runPruneWithRepo(ctx context.Context, opts PruneOptions, gopts GlobalOptions, repo *repository.Repository, ignoreSnapshots restic.IDSet, term *termstatus.Terminal) error {
func runPruneWithRepo(ctx context.Context, opts PruneOptions, repo *repository.Repository, ignoreSnapshots restic.IDSet, printer progress.Printer) error {
if repo.Cache() == nil {
Print("warning: running prune without a cache, this may be very slow!\n")
printer.S("warning: running prune without a cache, this may be very slow!")
}
printer := newTerminalProgressPrinter(gopts.verbosity, term)
printer.P("loading indexes...\n")
// loading the index before the snapshots is ok, as we use an exclusive lock here
bar := newIndexTerminalProgress(gopts.Quiet, gopts.JSON, term)
err := repo.LoadIndex(ctx, bar)
err := repo.LoadIndex(ctx, printer)
if err != nil {
return err
}
@@ -284,8 +280,8 @@ func printPruneStats(printer progress.Printer, stats repository.PruneStats) erro
func getUsedBlobs(ctx context.Context, repo restic.Repository, usedBlobs restic.FindBlobSet, ignoreSnapshots restic.IDSet, printer progress.Printer) error {
var snapshotTrees restic.IDs
printer.P("loading all snapshots...\n")
err := restic.ForAllSnapshots(ctx, repo, repo, ignoreSnapshots,
func(id restic.ID, sn *restic.Snapshot, err error) error {
err := data.ForAllSnapshots(ctx, repo, repo, ignoreSnapshots,
func(id restic.ID, sn *data.Snapshot, err error) error {
if err != nil {
debug.Log("failed to load snapshot %v (error %v)", id, err)
return err
@@ -304,5 +300,10 @@ func getUsedBlobs(ctx context.Context, repo restic.Repository, usedBlobs restic.
bar.SetMax(uint64(len(snapshotTrees)))
defer bar.Done()
return restic.FindUsedBlobs(ctx, repo, snapshotTrees, usedBlobs, bar)
err = data.FindUsedBlobs(ctx, repo, snapshotTrees, usedBlobs, bar)
if err != nil {
return errors.Fatalf("failed finding blobs: %v", err)
}
return nil
}

View File

@@ -7,30 +7,30 @@ import (
"testing"
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/repository"
rtest "github.com/restic/restic/internal/test"
"github.com/restic/restic/internal/ui/termstatus"
)
func testRunPrune(t testing.TB, gopts GlobalOptions, opts PruneOptions) {
func testRunPrune(t testing.TB, gopts global.Options, opts PruneOptions) {
t.Helper()
rtest.OK(t, testRunPruneOutput(gopts, opts))
rtest.OK(t, testRunPruneOutput(t, gopts, opts))
}
func testRunPruneMustFail(t testing.TB, gopts GlobalOptions, opts PruneOptions) {
func testRunPruneMustFail(t testing.TB, gopts global.Options, opts PruneOptions) {
t.Helper()
err := testRunPruneOutput(gopts, opts)
err := testRunPruneOutput(t, gopts, opts)
rtest.Assert(t, err != nil, "expected non nil error")
}
func testRunPruneOutput(gopts GlobalOptions, opts PruneOptions) error {
oldHook := gopts.backendTestHook
gopts.backendTestHook = func(r backend.Backend) (backend.Backend, error) { return newListOnceBackend(r), nil }
func testRunPruneOutput(t testing.TB, gopts global.Options, opts PruneOptions) error {
oldHook := gopts.BackendTestHook
gopts.BackendTestHook = func(r backend.Backend) (backend.Backend, error) { return newListOnceBackend(r), nil }
defer func() {
gopts.backendTestHook = oldHook
gopts.BackendTestHook = oldHook
}()
return withTermStatus(gopts, func(ctx context.Context, term *termstatus.Terminal) error {
return runPrune(context.TODO(), opts, gopts, term)
return withTermStatus(t, gopts, func(ctx context.Context, gopts global.Options) error {
return runPrune(context.TODO(), opts, gopts, gopts.Term)
})
}
@@ -89,8 +89,8 @@ func createPrunableRepo(t *testing.T, env *testEnvironment) {
testRunForget(t, env.gopts, ForgetOptions{}, firstSnapshot.String())
}
func testRunForgetJSON(t testing.TB, gopts GlobalOptions, args ...string) {
buf, err := withCaptureStdout(func() error {
func testRunForgetJSON(t testing.TB, gopts global.Options, args ...string) {
buf, err := withCaptureStdout(t, gopts, func(ctx context.Context, gopts global.Options) error {
gopts.JSON = true
opts := ForgetOptions{
DryRun: true,
@@ -99,9 +99,7 @@ func testRunForgetJSON(t testing.TB, gopts GlobalOptions, args ...string) {
pruneOpts := PruneOptions{
MaxUnused: "5%",
}
return withTermStatus(gopts, func(ctx context.Context, term *termstatus.Terminal) error {
return runForget(context.TODO(), opts, pruneOpts, gopts, term, args)
})
return runForget(context.TODO(), opts, pruneOpts, gopts, gopts.Term, args)
})
rtest.OK(t, err)
@@ -122,8 +120,8 @@ func testPrune(t *testing.T, pruneOpts PruneOptions, checkOpts CheckOptions) {
createPrunableRepo(t, env)
testRunPrune(t, env.gopts, pruneOpts)
rtest.OK(t, withTermStatus(env.gopts, func(ctx context.Context, term *termstatus.Terminal) error {
_, err := runCheck(context.TODO(), checkOpts, env.gopts, nil, term)
rtest.OK(t, withTermStatus(t, env.gopts, func(ctx context.Context, gopts global.Options) error {
_, err := runCheck(context.TODO(), checkOpts, gopts, nil, gopts.Term)
return err
}))
}
@@ -152,14 +150,14 @@ func TestPruneWithDamagedRepository(t *testing.T) {
testListSnapshots(t, env.gopts, 1)
removePacksExcept(env.gopts, t, oldPacks, false)
oldHook := env.gopts.backendTestHook
env.gopts.backendTestHook = func(r backend.Backend) (backend.Backend, error) { return newListOnceBackend(r), nil }
oldHook := env.gopts.BackendTestHook
env.gopts.BackendTestHook = func(r backend.Backend) (backend.Backend, error) { return newListOnceBackend(r), nil }
defer func() {
env.gopts.backendTestHook = oldHook
env.gopts.BackendTestHook = oldHook
}()
// prune should fail
rtest.Equals(t, repository.ErrPacksMissing, withTermStatus(env.gopts, func(ctx context.Context, term *termstatus.Terminal) error {
return runPrune(context.TODO(), pruneDefaultOptions, env.gopts, term)
rtest.Equals(t, repository.ErrPacksMissing, withTermStatus(t, env.gopts, func(ctx context.Context, gopts global.Options) error {
return runPrune(context.TODO(), pruneDefaultOptions, gopts, gopts.Term)
}), "prune should have reported index not complete error")
}
@@ -231,8 +229,8 @@ func testEdgeCaseRepo(t *testing.T, tarfile string, optionsCheck CheckOptions, o
if checkOK {
testRunCheck(t, env.gopts)
} else {
rtest.Assert(t, withTermStatus(env.gopts, func(ctx context.Context, term *termstatus.Terminal) error {
_, err := runCheck(context.TODO(), optionsCheck, env.gopts, nil, term)
rtest.Assert(t, withTermStatus(t, env.gopts, func(ctx context.Context, gopts global.Options) error {
_, err := runCheck(context.TODO(), optionsCheck, gopts, nil, gopts.Term)
return err
}) != nil,
"check should have reported an error")
@@ -242,8 +240,8 @@ func testEdgeCaseRepo(t *testing.T, tarfile string, optionsCheck CheckOptions, o
testRunPrune(t, env.gopts, optionsPrune)
testRunCheck(t, env.gopts)
} else {
rtest.Assert(t, withTermStatus(env.gopts, func(ctx context.Context, term *termstatus.Terminal) error {
return runPrune(context.TODO(), optionsPrune, env.gopts, term)
rtest.Assert(t, withTermStatus(t, env.gopts, func(ctx context.Context, gopts global.Options) error {
return runPrune(context.TODO(), optionsPrune, gopts, gopts.Term)
}) != nil,
"prune should have reported an error")
}

View File

@@ -5,16 +5,17 @@ import (
"os"
"time"
"github.com/restic/restic/internal/data"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/ui/progress"
"github.com/restic/restic/internal/ui/termstatus"
"github.com/spf13/cobra"
"golang.org/x/sync/errgroup"
)
func newRecoverCommand() *cobra.Command {
func newRecoverCommand(globalOptions *global.Options) *cobra.Command {
cmd := &cobra.Command{
Use: "recover [flags]",
Short: "Recover data from the repository not referenced by snapshots",
@@ -35,28 +36,25 @@ Exit status is 12 if the password is incorrect.
GroupID: cmdGroupDefault,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, _ []string) error {
term, cancel := setupTermstatus()
defer cancel()
return runRecover(cmd.Context(), globalOptions, term)
return runRecover(cmd.Context(), *globalOptions, globalOptions.Term)
},
}
return cmd
}
func runRecover(ctx context.Context, gopts GlobalOptions, term *termstatus.Terminal) error {
func runRecover(ctx context.Context, gopts global.Options, term ui.Terminal) error {
hostname, err := os.Hostname()
if err != nil {
return err
}
ctx, repo, unlock, err := openWithExclusiveLock(ctx, gopts, false)
printer := ui.NewProgressPrinter(false, gopts.Verbosity, term)
ctx, repo, unlock, err := openWithExclusiveLock(ctx, gopts, false, printer)
if err != nil {
return err
}
defer unlock()
printer := newTerminalProgressPrinter(gopts.verbosity, term)
snapshotLister, err := restic.MemorizeList(ctx, repo, restic.SnapshotFile)
if err != nil {
return err
@@ -69,8 +67,7 @@ func runRecover(ctx context.Context, gopts GlobalOptions, term *termstatus.Termi
}
printer.P("load index files\n")
bar := newIndexTerminalProgress(gopts.Quiet, gopts.JSON, term)
if err = repo.LoadIndex(ctx, bar); err != nil {
if err = repo.LoadIndex(ctx, printer); err != nil {
return err
}
@@ -88,9 +85,10 @@ func runRecover(ctx context.Context, gopts GlobalOptions, term *termstatus.Termi
}
printer.P("load %d trees\n", len(trees))
bar = newTerminalProgressMax(!gopts.Quiet, uint64(len(trees)), "trees loaded", term)
bar := printer.NewCounter("trees loaded")
bar.SetMax(uint64(len(trees)))
for id := range trees {
tree, err := restic.LoadTree(ctx, repo, id)
tree, err := data.LoadTree(ctx, repo, id)
if ctx.Err() != nil {
return ctx.Err()
}
@@ -99,8 +97,12 @@ func runRecover(ctx context.Context, gopts GlobalOptions, term *termstatus.Termi
continue
}
for _, node := range tree.Nodes {
if node.Type == restic.NodeTypeDir && node.Subtree != nil {
for item := range tree {
if item.Error != nil {
return item.Error
}
node := item.Node
if node.Type == data.NodeTypeDir && node.Subtree != nil {
trees[*node.Subtree] = true
}
}
@@ -109,7 +111,7 @@ func runRecover(ctx context.Context, gopts GlobalOptions, term *termstatus.Termi
bar.Done()
printer.P("load snapshots\n")
err = restic.ForAllSnapshots(ctx, snapshotLister, repo, nil, func(_ restic.ID, sn *restic.Snapshot, _ error) error {
err = data.ForAllSnapshots(ctx, snapshotLister, repo, nil, func(_ restic.ID, sn *data.Snapshot, _ error) error {
trees[*sn.Tree] = true
return nil
})
@@ -136,42 +138,33 @@ func runRecover(ctx context.Context, gopts GlobalOptions, term *termstatus.Termi
return ctx.Err()
}
tree := restic.NewTree(len(roots))
for id := range roots {
var subtreeID = id
node := restic.Node{
Type: restic.NodeTypeDir,
Name: id.Str(),
Mode: 0755,
Subtree: &subtreeID,
AccessTime: time.Now(),
ModTime: time.Now(),
ChangeTime: time.Now(),
}
err := tree.Insert(&node)
if err != nil {
return err
}
}
wg, wgCtx := errgroup.WithContext(ctx)
repo.StartPackUploader(wgCtx, wg)
var treeID restic.ID
wg.Go(func() error {
err = repo.WithBlobUploader(ctx, func(ctx context.Context, uploader restic.BlobSaverWithAsync) error {
var err error
treeID, err = restic.SaveTree(wgCtx, repo, tree)
tw := data.NewTreeWriter(uploader)
for id := range roots {
var subtreeID = id
node := data.Node{
Type: data.NodeTypeDir,
Name: id.Str(),
Mode: 0755,
Subtree: &subtreeID,
AccessTime: time.Now(),
ModTime: time.Now(),
ChangeTime: time.Now(),
}
err := tw.AddNode(&node)
if err != nil {
return err
}
}
treeID, err = tw.Finalize(ctx)
if err != nil {
return errors.Fatalf("unable to save new tree to the repository: %v", err)
}
err = repo.Flush(wgCtx)
if err != nil {
return errors.Fatalf("unable to save blobs to the repository: %v", err)
}
return nil
})
err = wg.Wait()
if err != nil {
return err
}
@@ -181,14 +174,14 @@ func runRecover(ctx context.Context, gopts GlobalOptions, term *termstatus.Termi
}
func createSnapshot(ctx context.Context, printer progress.Printer, name, hostname string, tags []string, repo restic.SaverUnpacked[restic.WriteableFileType], tree *restic.ID) error {
sn, err := restic.NewSnapshot([]string{name}, tags, hostname, time.Now())
sn, err := data.NewSnapshot([]string{name}, tags, hostname, time.Now())
if err != nil {
return errors.Fatalf("unable to save snapshot: %v", err)
}
sn.Tree = tree
id, err := restic.SaveSnapshot(ctx, repo, sn)
id, err := data.SaveSnapshot(ctx, repo, sn)
if err != nil {
return errors.Fatalf("unable to save snapshot: %v", err)
}


@@ -4,20 +4,20 @@ import (
"context"
"testing"
"github.com/restic/restic/internal/global"
rtest "github.com/restic/restic/internal/test"
"github.com/restic/restic/internal/ui/termstatus"
)
func testRunRecover(t testing.TB, gopts GlobalOptions) {
rtest.OK(t, withTermStatus(gopts, func(ctx context.Context, term *termstatus.Terminal) error {
return runRecover(context.TODO(), gopts, term)
func testRunRecover(t testing.TB, gopts global.Options) {
rtest.OK(t, withTermStatus(t, gopts, func(ctx context.Context, gopts global.Options) error {
return runRecover(context.TODO(), gopts, gopts.Term)
}))
}
func TestRecover(t *testing.T) {
env, cleanup := withTestEnvironment(t)
// must list index more than once
env.gopts.backendTestHook = nil
env.gopts.BackendTestHook = nil
defer cleanup()
testSetupBackupData(t, env)
@@ -33,5 +33,7 @@ func TestRecover(t *testing.T) {
ids = testListSnapshots(t, env.gopts, 1)
testRunCheck(t, env.gopts)
// check that the root tree is included in the snapshot
rtest.OK(t, runCat(context.TODO(), env.gopts, []string{"tree", ids[0].String() + ":" + sn.Tree.Str()}))
rtest.OK(t, withTermStatus(t, env.gopts, func(ctx context.Context, gopts global.Options) error {
return runCat(context.TODO(), gopts, []string{"tree", ids[0].String() + ":" + sn.Tree.Str()}, gopts.Term)
}))
}


@@ -1,10 +1,11 @@
package main
import (
"github.com/restic/restic/internal/global"
"github.com/spf13/cobra"
)
func newRepairCommand() *cobra.Command {
func newRepairCommand(globalOptions *global.Options) *cobra.Command {
cmd := &cobra.Command{
Use: "repair",
Short: "Repair the repository",
@@ -13,9 +14,9 @@ func newRepairCommand() *cobra.Command {
}
cmd.AddCommand(
newRepairIndexCommand(),
newRepairPacksCommand(),
newRepairSnapshotsCommand(),
newRepairIndexCommand(globalOptions),
newRepairPacksCommand(globalOptions),
newRepairSnapshotsCommand(globalOptions),
)
return cmd
}


@@ -3,13 +3,14 @@ package main
import (
"context"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/ui/termstatus"
"github.com/restic/restic/internal/ui"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
func newRepairIndexCommand() *cobra.Command {
func newRepairIndexCommand(globalOptions *global.Options) *cobra.Command {
var opts RepairIndexOptions
cmd := &cobra.Command{
@@ -30,9 +31,7 @@ Exit status is 12 if the password is incorrect.
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, _ []string) error {
term, cancel := setupTermstatus()
defer cancel()
return runRebuildIndex(cmd.Context(), opts, globalOptions, term)
return runRebuildIndex(cmd.Context(), opts, *globalOptions, globalOptions.Term)
},
}
@@ -49,10 +48,10 @@ func (opts *RepairIndexOptions) AddFlags(f *pflag.FlagSet) {
f.BoolVar(&opts.ReadAllPacks, "read-all-packs", false, "read all pack files to generate new index from scratch")
}
func newRebuildIndexCommand() *cobra.Command {
func newRebuildIndexCommand(globalOptions *global.Options) *cobra.Command {
var opts RepairIndexOptions
replacement := newRepairIndexCommand()
replacement := newRepairIndexCommand(globalOptions)
cmd := &cobra.Command{
Use: "rebuild-index [flags]",
Short: replacement.Short,
@@ -62,9 +61,7 @@ func newRebuildIndexCommand() *cobra.Command {
// must create a new instance of the run function as it captures opts
// by reference
RunE: func(cmd *cobra.Command, _ []string) error {
term, cancel := setupTermstatus()
defer cancel()
return runRebuildIndex(cmd.Context(), opts, globalOptions, term)
return runRebuildIndex(cmd.Context(), opts, *globalOptions, globalOptions.Term)
},
}
@@ -72,15 +69,15 @@ func newRebuildIndexCommand() *cobra.Command {
return cmd
}
func runRebuildIndex(ctx context.Context, opts RepairIndexOptions, gopts GlobalOptions, term *termstatus.Terminal) error {
ctx, repo, unlock, err := openWithExclusiveLock(ctx, gopts, false)
func runRebuildIndex(ctx context.Context, opts RepairIndexOptions, gopts global.Options, term ui.Terminal) error {
printer := ui.NewProgressPrinter(false, gopts.Verbosity, term)
ctx, repo, unlock, err := openWithExclusiveLock(ctx, gopts, false, printer)
if err != nil {
return err
}
defer unlock()
printer := newTerminalProgressPrinter(gopts.verbosity, term)
err = repository.RepairIndex(ctx, repo, repository.RepairIndexOptions{
ReadAllPacks: opts.ReadAllPacks,
}, printer)


@@ -10,29 +10,27 @@ import (
"github.com/restic/restic/internal/backend"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/repository/index"
"github.com/restic/restic/internal/restic"
rtest "github.com/restic/restic/internal/test"
"github.com/restic/restic/internal/ui/termstatus"
)
func testRunRebuildIndex(t testing.TB, gopts GlobalOptions) {
rtest.OK(t, withRestoreGlobalOptions(func() error {
return withTermStatus(gopts, func(ctx context.Context, term *termstatus.Terminal) error {
globalOptions.stdout = io.Discard
return runRebuildIndex(context.TODO(), RepairIndexOptions{}, gopts, term)
})
func testRunRebuildIndex(t testing.TB, gopts global.Options) {
rtest.OK(t, withTermStatus(t, gopts, func(ctx context.Context, gopts global.Options) error {
gopts.Quiet = true
return runRebuildIndex(context.TODO(), RepairIndexOptions{}, gopts, gopts.Term)
}))
}
func testRebuildIndex(t *testing.T, backendTestHook backendWrapper) {
func testRebuildIndex(t *testing.T, backendTestHook global.BackendWrapper) {
env, cleanup := withTestEnvironment(t)
defer cleanup()
datafile := filepath.Join("..", "..", "internal", "checker", "testdata", "duplicate-packs-in-index-test-repo.tar.gz")
rtest.SetupTarTestFixture(t, env.base, datafile)
out, err := testRunCheckOutput(env.gopts, false)
out, err := testRunCheckOutput(t, env.gopts, false)
if !strings.Contains(out, "contained in several indexes") {
t.Fatalf("did not find checker hint for packs in several indexes")
}
@@ -45,11 +43,11 @@ func testRebuildIndex(t *testing.T, backendTestHook backendWrapper) {
t.Fatalf("did not find hint for repair index command")
}
env.gopts.backendTestHook = backendTestHook
env.gopts.BackendTestHook = backendTestHook
testRunRebuildIndex(t, env.gopts)
env.gopts.backendTestHook = nil
out, err = testRunCheckOutput(env.gopts, false)
env.gopts.BackendTestHook = nil
out, err = testRunCheckOutput(t, env.gopts, false)
if len(out) != 0 {
t.Fatalf("expected no output from the checker, got: %v", out)
}
@@ -128,14 +126,12 @@ func TestRebuildIndexFailsOnAppendOnly(t *testing.T) {
datafile := filepath.Join("..", "..", "internal", "checker", "testdata", "duplicate-packs-in-index-test-repo.tar.gz")
rtest.SetupTarTestFixture(t, env.base, datafile)
err := withRestoreGlobalOptions(func() error {
env.gopts.backendTestHook = func(r backend.Backend) (backend.Backend, error) {
return &appendOnlyBackend{r}, nil
}
return withTermStatus(env.gopts, func(ctx context.Context, term *termstatus.Terminal) error {
globalOptions.stdout = io.Discard
return runRebuildIndex(context.TODO(), RepairIndexOptions{}, env.gopts, term)
})
env.gopts.BackendTestHook = func(r backend.Backend) (backend.Backend, error) {
return &appendOnlyBackend{r}, nil
}
err := withTermStatus(t, env.gopts, func(ctx context.Context, gopts global.Options) error {
gopts.Quiet = true
return runRebuildIndex(context.TODO(), RepairIndexOptions{}, gopts, gopts.Term)
})
if err == nil {


@@ -7,13 +7,14 @@ import (
"os"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/repository"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui/termstatus"
"github.com/restic/restic/internal/ui"
"github.com/spf13/cobra"
)
func newRepairPacksCommand() *cobra.Command {
func newRepairPacksCommand(globalOptions *global.Options) *cobra.Command {
cmd := &cobra.Command{
Use: "packs [packIDs...]",
Short: "Salvage damaged pack files",
@@ -32,15 +33,13 @@ Exit status is 12 if the password is incorrect.
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
term, cancel := setupTermstatus()
defer cancel()
return runRepairPacks(cmd.Context(), globalOptions, term, args)
return runRepairPacks(cmd.Context(), *globalOptions, globalOptions.Term, args)
},
}
return cmd
}
func runRepairPacks(ctx context.Context, gopts GlobalOptions, term *termstatus.Terminal, args []string) error {
func runRepairPacks(ctx context.Context, gopts global.Options, term ui.Terminal, args []string) error {
ids := restic.NewIDSet()
for _, arg := range args {
id, err := restic.ParseID(arg)
@@ -53,16 +52,15 @@ func runRepairPacks(ctx context.Context, gopts GlobalOptions, term *termstatus.T
return errors.Fatal("no ids specified")
}
ctx, repo, unlock, err := openWithExclusiveLock(ctx, gopts, false)
printer := ui.NewProgressPrinter(false, gopts.Verbosity, term)
ctx, repo, unlock, err := openWithExclusiveLock(ctx, gopts, false, printer)
if err != nil {
return err
}
defer unlock()
printer := newTerminalProgressPrinter(gopts.verbosity, term)
bar := newIndexTerminalProgress(gopts.Quiet, gopts.JSON, term)
err = repo.LoadIndex(ctx, bar)
err = repo.LoadIndex(ctx, printer)
if err != nil {
return errors.Fatalf("%s", err)
}
@@ -93,6 +91,6 @@ func runRepairPacks(ctx context.Context, gopts GlobalOptions, term *termstatus.T
return errors.Fatalf("%s", err)
}
Warnf("\nUse `restic repair snapshots --forget` to remove the corrupted data blobs from all snapshots\n")
printer.E("\nUse `restic repair snapshots --forget` to remove the corrupted data blobs from all snapshots")
return nil
}


@@ -2,16 +2,20 @@ package main
import (
"context"
"slices"
"github.com/restic/restic/internal/data"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/walker"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
func newRepairSnapshotsCommand() *cobra.Command {
func newRepairSnapshotsCommand(globalOptions *global.Options) *cobra.Command {
var opts RepairOptions
cmd := &cobra.Command{
@@ -49,7 +53,8 @@ Exit status is 12 if the password is incorrect.
`,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
return runRepairSnapshots(cmd.Context(), globalOptions, opts, args)
finalizeSnapshotFilter(&opts.SnapshotFilter)
return runRepairSnapshots(cmd.Context(), *globalOptions, opts, args, globalOptions.Term)
},
}
@@ -62,7 +67,7 @@ type RepairOptions struct {
DryRun bool
Forget bool
restic.SnapshotFilter
data.SnapshotFilter
}
func (opts *RepairOptions) AddFlags(f *pflag.FlagSet) {
@@ -72,8 +77,10 @@ func (opts *RepairOptions) AddFlags(f *pflag.FlagSet) {
initMultiSnapshotFilter(f, &opts.SnapshotFilter, true)
}
func runRepairSnapshots(ctx context.Context, gopts GlobalOptions, opts RepairOptions, args []string) error {
ctx, repo, unlock, err := openWithExclusiveLock(ctx, gopts, opts.DryRun)
func runRepairSnapshots(ctx context.Context, gopts global.Options, opts RepairOptions, args []string, term ui.Terminal) error {
printer := ui.NewProgressPrinter(false, gopts.Verbosity, term)
ctx, repo, unlock, err := openWithExclusiveLock(ctx, gopts, opts.DryRun, printer)
if err != nil {
return err
}
@@ -84,8 +91,7 @@ func runRepairSnapshots(ctx context.Context, gopts GlobalOptions, opts RepairOpt
return err
}
bar := newIndexProgress(gopts.Quiet, gopts.JSON)
if err := repo.LoadIndex(ctx, bar); err != nil {
if err := repo.LoadIndex(ctx, printer); err != nil {
return err
}
@@ -94,12 +100,12 @@ func runRepairSnapshots(ctx context.Context, gopts GlobalOptions, opts RepairOpt
// - trees which cannot be loaded (-> the tree contents will be removed)
// - files whose contents are not fully available (-> file will be modified)
rewriter := walker.NewTreeRewriter(walker.RewriteOpts{
RewriteNode: func(node *restic.Node, path string) *restic.Node {
if node.Type == restic.NodeTypeIrregular || node.Type == restic.NodeTypeInvalid {
Verbosef(" file %q: removed node with invalid type %q\n", path, node.Type)
RewriteNode: func(node *data.Node, path string) *data.Node {
if node.Type == data.NodeTypeIrregular || node.Type == data.NodeTypeInvalid {
printer.P(" file %q: removed node with invalid type %q", path, node.Type)
return nil
}
if node.Type != restic.NodeTypeFile {
if node.Type != data.NodeTypeFile {
return node
}
@@ -116,40 +122,36 @@ func runRepairSnapshots(ctx context.Context, gopts GlobalOptions, opts RepairOpt
}
}
if !ok {
Verbosef(" file %q: removed missing content\n", path)
printer.P(" file %q: removed missing content", path)
} else if newSize != node.Size {
Verbosef(" file %q: fixed incorrect size\n", path)
printer.P(" file %q: fixed incorrect size", path)
}
// no-ops if already correct
node.Content = newContent
node.Size = newSize
return node
},
RewriteFailedTree: func(_ restic.ID, path string, _ error) (restic.ID, error) {
RewriteFailedTree: func(_ restic.ID, path string, _ error) (data.TreeNodeIterator, error) {
if path == "/" {
Verbosef(" dir %q: not readable\n", path)
printer.P(" dir %q: not readable", path)
// remove snapshots with invalid root node
return restic.ID{}, nil
return nil, nil
}
// If a subtree fails to load, remove it
Verbosef(" dir %q: replaced with empty directory\n", path)
emptyID, err := restic.SaveTree(ctx, repo, &restic.Tree{})
if err != nil {
return restic.ID{}, err
}
return emptyID, nil
printer.P(" dir %q: replaced with empty directory", path)
return slices.Values([]data.NodeOrError{}), nil
},
AllowUnstableSerialization: true,
})
changedCount := 0
for sn := range FindFilteredSnapshots(ctx, snapshotLister, repo, &opts.SnapshotFilter, args) {
Verbosef("\n%v\n", sn)
for sn := range FindFilteredSnapshots(ctx, snapshotLister, repo, &opts.SnapshotFilter, args, printer) {
printer.P("\n%v", sn)
changed, err := filterAndReplaceSnapshot(ctx, repo, sn,
func(ctx context.Context, sn *restic.Snapshot) (restic.ID, *restic.SnapshotSummary, error) {
id, err := rewriter.RewriteTree(ctx, repo, "/", *sn.Tree)
func(ctx context.Context, sn *data.Snapshot, uploader restic.BlobSaver) (restic.ID, *data.SnapshotSummary, error) {
id, err := rewriter.RewriteTree(ctx, repo, uploader, "/", *sn.Tree)
return id, nil, err
}, opts.DryRun, opts.Forget, nil, "repaired")
}, opts.DryRun, opts.Forget, nil, "repaired", printer, false)
if err != nil {
return errors.Fatalf("unable to rewrite snapshot ID %q: %v", sn.ID().Str(), err)
}
@@ -161,18 +163,18 @@ func runRepairSnapshots(ctx context.Context, gopts GlobalOptions, opts RepairOpt
return ctx.Err()
}
Verbosef("\n")
printer.P("")
if changedCount == 0 {
if !opts.DryRun {
Verbosef("no snapshots were modified\n")
printer.P("no snapshots were modified")
} else {
Verbosef("no snapshots would be modified\n")
printer.P("no snapshots would be modified")
}
} else {
if !opts.DryRun {
Verbosef("modified %v snapshots\n", changedCount)
printer.P("modified %v snapshots", changedCount)
} else {
Verbosef("would modify %v snapshots\n", changedCount)
printer.P("would modify %v snapshots", changedCount)
}
}


@@ -10,16 +10,19 @@ import (
"reflect"
"testing"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/restic"
rtest "github.com/restic/restic/internal/test"
)
func testRunRepairSnapshot(t testing.TB, gopts GlobalOptions, forget bool) {
func testRunRepairSnapshot(t testing.TB, gopts global.Options, forget bool) {
opts := RepairOptions{
Forget: forget,
}
rtest.OK(t, runRepairSnapshots(context.TODO(), gopts, opts, nil))
rtest.OK(t, withTermStatus(t, gopts, func(ctx context.Context, gopts global.Options) error {
return runRepairSnapshots(context.TODO(), gopts, opts, nil, gopts.Term)
}))
}
func createRandomFile(t testing.TB, env *testEnvironment, path string, size int) {
@@ -64,7 +67,7 @@ func TestRepairSnapshotsWithLostData(t *testing.T) {
// repository must be ok after removing the broken snapshots
testRunForget(t, env.gopts, ForgetOptions{}, snapshotIDs[0].String(), snapshotIDs[1].String())
testListSnapshots(t, env.gopts, 2)
_, err := testRunCheckOutput(env.gopts, false)
_, err := testRunCheckOutput(t, env.gopts, false)
rtest.OK(t, err)
}
@@ -77,7 +80,7 @@ func TestRepairSnapshotsWithLostTree(t *testing.T) {
createRandomFile(t, env, "foo/bar/file", 12345)
testRunBackup(t, "", []string{env.testdata}, BackupOptions{}, env.gopts)
oldSnapshot := testListSnapshots(t, env.gopts, 1)
oldPacks := testRunList(t, "packs", env.gopts)
oldPacks := testRunList(t, env.gopts, "packs")
// keep foo/bar unchanged
createRandomFile(t, env, "foo/bar2", 1024)
@@ -93,7 +96,7 @@ func TestRepairSnapshotsWithLostTree(t *testing.T) {
testRunRebuildIndex(t, env.gopts)
testRunRepairSnapshot(t, env.gopts, true)
testListSnapshots(t, env.gopts, 1)
_, err := testRunCheckOutput(env.gopts, false)
_, err := testRunCheckOutput(t, env.gopts, false)
rtest.OK(t, err)
}
@@ -106,7 +109,7 @@ func TestRepairSnapshotsWithLostRootTree(t *testing.T) {
createRandomFile(t, env, "foo/bar/file", 12345)
testRunBackup(t, "", []string{env.testdata}, BackupOptions{}, env.gopts)
testListSnapshots(t, env.gopts, 1)
oldPacks := testRunList(t, "packs", env.gopts)
oldPacks := testRunList(t, env.gopts, "packs")
// remove all trees
removePacks(env.gopts, t, restic.NewIDSet(oldPacks...))
@@ -116,7 +119,7 @@ func TestRepairSnapshotsWithLostRootTree(t *testing.T) {
testRunRebuildIndex(t, env.gopts)
testRunRepairSnapshot(t, env.gopts, true)
testListSnapshots(t, env.gopts, 0)
_, err := testRunCheckOutput(env.gopts, false)
_, err := testRunCheckOutput(t, env.gopts, false)
rtest.OK(t, err)
}


@@ -3,22 +3,24 @@ package main
import (
"context"
"path/filepath"
"runtime"
"time"
"github.com/restic/restic/internal/data"
"github.com/restic/restic/internal/debug"
"github.com/restic/restic/internal/errors"
"github.com/restic/restic/internal/filter"
"github.com/restic/restic/internal/restic"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/restorer"
"github.com/restic/restic/internal/ui"
"github.com/restic/restic/internal/ui/progress"
restoreui "github.com/restic/restic/internal/ui/restore"
"github.com/restic/restic/internal/ui/termstatus"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
)
func newRestoreCommand() *cobra.Command {
func newRestoreCommand(globalOptions *global.Options) *cobra.Command {
var opts RestoreOptions
cmd := &cobra.Command{
@@ -34,6 +36,8 @@ repository.
To only restore a specific subfolder, you can use the "snapshotID:subfolder"
syntax, where "subfolder" is a path within the snapshot.
POSIX ACLs are always restored by their numeric value, while file ownership can optionally be restored by name instead of numeric value.
EXIT STATUS
===========
@@ -46,9 +50,8 @@ Exit status is 12 if the password is incorrect.
GroupID: cmdGroupDefault,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
term, cancel := setupTermstatus()
defer cancel()
return runRestore(cmd.Context(), opts, globalOptions, term, args)
finalizeSnapshotFilter(&opts.SnapshotFilter)
return runRestore(cmd.Context(), opts, *globalOptions, globalOptions.Term, args)
},
}
@@ -61,7 +64,7 @@ type RestoreOptions struct {
filter.ExcludePatternOptions
filter.IncludePatternOptions
Target string
restic.SnapshotFilter
data.SnapshotFilter
DryRun bool
Sparse bool
Verify bool
@@ -69,6 +72,7 @@ type RestoreOptions struct {
Delete bool
ExcludeXattrPattern []string
IncludeXattrPattern []string
OwnershipByName bool
}
func (opts *RestoreOptions) AddFlags(f *pflag.FlagSet) {
@@ -86,17 +90,27 @@ func (opts *RestoreOptions) AddFlags(f *pflag.FlagSet) {
f.BoolVar(&opts.Verify, "verify", false, "verify restored files content")
f.Var(&opts.Overwrite, "overwrite", "overwrite behavior, one of (always|if-changed|if-newer|never)")
f.BoolVar(&opts.Delete, "delete", false, "delete files from target directory if they do not exist in snapshot. Use '--dry-run -vv' to check what would be deleted")
if runtime.GOOS != "windows" {
f.BoolVar(&opts.OwnershipByName, "ownership-by-name", false, "restore file ownership by user name and group name (except POSIX ACLs)")
}
}
func runRestore(ctx context.Context, opts RestoreOptions, gopts GlobalOptions,
term *termstatus.Terminal, args []string) error {
func runRestore(ctx context.Context, opts RestoreOptions, gopts global.Options,
term ui.Terminal, args []string) error {
excludePatternFns, err := opts.ExcludePatternOptions.CollectPatterns(Warnf)
var printer restoreui.ProgressPrinter
if gopts.JSON {
printer = restoreui.NewJSONProgress(term, gopts.Verbosity)
} else {
printer = restoreui.NewTextProgress(term, gopts.Verbosity)
}
excludePatternFns, err := opts.ExcludePatternOptions.CollectPatterns(printer.E)
if err != nil {
return err
}
includePatternFns, err := opts.IncludePatternOptions.CollectPatterns(Warnf)
includePatternFns, err := opts.IncludePatternOptions.CollectPatterns(printer.E)
if err != nil {
return err
}
@@ -131,47 +145,35 @@ func runRestore(ctx context.Context, opts RestoreOptions, gopts GlobalOptions,
debug.Log("restore %v to %v", snapshotIDString, opts.Target)
ctx, repo, unlock, err := openWithReadLock(ctx, gopts, gopts.NoLock)
ctx, repo, unlock, err := openWithReadLock(ctx, gopts, gopts.NoLock, printer)
if err != nil {
return err
}
defer unlock()
sn, subfolder, err := (&restic.SnapshotFilter{
Hosts: opts.Hosts,
Paths: opts.Paths,
Tags: opts.Tags,
}).FindLatest(ctx, repo, repo, snapshotIDString)
sn, subfolder, err := opts.SnapshotFilter.FindLatest(ctx, repo, repo, snapshotIDString)
if err != nil {
return errors.Fatalf("failed to find snapshot: %v", err)
}
bar := newIndexTerminalProgress(gopts.Quiet, gopts.JSON, term)
err = repo.LoadIndex(ctx, bar)
err = repo.LoadIndex(ctx, printer)
if err != nil {
return err
}
sn.Tree, err = restic.FindTreeDirectory(ctx, repo, sn.Tree, subfolder)
sn.Tree, err = data.FindTreeDirectory(ctx, repo, sn.Tree, subfolder)
if err != nil {
return err
}
msg := ui.NewMessage(term, gopts.verbosity)
var printer restoreui.ProgressPrinter
if gopts.JSON {
printer = restoreui.NewJSONProgress(term, gopts.verbosity)
} else {
printer = restoreui.NewTextProgress(term, gopts.verbosity)
}
progress := restoreui.NewProgress(printer, calculateProgressInterval(!gopts.Quiet, gopts.JSON))
progress := restoreui.NewProgress(printer, ui.CalculateProgressInterval(!gopts.Quiet, gopts.JSON, term.CanUpdateStatus()))
res := restorer.NewRestorer(repo, sn, restorer.Options{
DryRun: opts.DryRun,
Sparse: opts.Sparse,
Progress: progress,
Overwrite: opts.Overwrite,
Delete: opts.Delete,
DryRun: opts.DryRun,
Sparse: opts.Sparse,
Progress: progress,
Overwrite: opts.Overwrite,
Delete: opts.Delete,
OwnershipByName: opts.OwnershipByName,
})
totalErrors := 0
@@ -180,13 +182,13 @@ func runRestore(ctx context.Context, opts RestoreOptions, gopts GlobalOptions,
return progress.Error(location, err)
}
res.Warn = func(message string) {
msg.E("Warning: %s\n", message)
printer.E("Warning: %s\n", message)
}
res.Info = func(message string) {
if gopts.JSON {
return
}
msg.P("Info: %s\n", message)
printer.P("Info: %s\n", message)
}
selectExcludeFilter := func(item string, isDir bool) (selectedForRestore bool, childMayBeSelected bool) {
@@ -234,13 +236,13 @@ func runRestore(ctx context.Context, opts RestoreOptions, gopts GlobalOptions,
res.SelectFilter = selectIncludeFilter
}
res.XattrSelectFilter, err = getXattrSelectFilter(opts)
res.XattrSelectFilter, err = getXattrSelectFilter(opts, printer)
if err != nil {
return err
}
if !gopts.JSON {
msg.P("restoring %s to %s\n", res.Snapshot(), opts.Target)
printer.P("restoring %s to %s\n", res.Snapshot(), opts.Target)
}
countRestoredFiles, err := res.RestoreTo(ctx, opts.Target)
@@ -251,26 +253,26 @@ func runRestore(ctx context.Context, opts RestoreOptions, gopts GlobalOptions,
progress.Finish()
if totalErrors > 0 {
return errors.Fatalf("There were %d errors\n", totalErrors)
return errors.Fatalf("There were %d errors", totalErrors)
}
if opts.Verify {
if !gopts.JSON {
msg.P("verifying files in %s\n", opts.Target)
printer.P("verifying files in %s\n", opts.Target)
}
var count int
t0 := time.Now()
bar := newTerminalProgressMax(!gopts.Quiet && !gopts.JSON && stdoutIsTerminal(), 0, "files verified", term)
bar := printer.NewCounterTerminalOnly("files verified")
count, err = res.VerifyFiles(ctx, opts.Target, countRestoredFiles, bar)
if err != nil {
return err
}
if totalErrors > 0 {
return errors.Fatalf("There were %d errors\n", totalErrors)
return errors.Fatalf("There were %d errors", totalErrors)
}
if !gopts.JSON {
msg.P("finished verifying %d files in %s (took %s)\n", count, opts.Target,
printer.P("finished verifying %d files in %s (took %s)\n", count, opts.Target,
time.Since(t0).Round(time.Millisecond))
}
}
@@ -278,7 +280,7 @@ func runRestore(ctx context.Context, opts RestoreOptions, gopts GlobalOptions,
return nil
}
func getXattrSelectFilter(opts RestoreOptions) (func(xattrName string) bool, error) {
func getXattrSelectFilter(opts RestoreOptions, printer progress.Printer) (func(xattrName string) bool, error) {
hasXattrExcludes := len(opts.ExcludeXattrPattern) > 0
hasXattrIncludes := len(opts.IncludeXattrPattern) > 0
@@ -292,7 +294,7 @@ func getXattrSelectFilter(opts RestoreOptions) (func(xattrName string) bool, err
}
return func(xattrName string) bool {
shouldReject := filter.RejectByPattern(opts.ExcludeXattrPattern, Warnf)(xattrName)
shouldReject := filter.RejectByPattern(opts.ExcludeXattrPattern, printer.E)(xattrName)
return !shouldReject
}, nil
}
@@ -304,7 +306,7 @@ func getXattrSelectFilter(opts RestoreOptions) (func(xattrName string) bool, err
}
return func(xattrName string) bool {
shouldInclude, _ := filter.IncludeByPattern(opts.IncludeXattrPattern, Warnf)(xattrName)
shouldInclude, _ := filter.IncludeByPattern(opts.IncludeXattrPattern, printer.E)(xattrName)
return shouldInclude
}, nil
}


@@ -3,7 +3,6 @@ package main
import (
"context"
"fmt"
"io"
"math/rand"
"os"
"path/filepath"
@@ -12,67 +11,68 @@ import (
"testing"
"time"
"github.com/restic/restic/internal/data"
"github.com/restic/restic/internal/global"
"github.com/restic/restic/internal/restic"
rtest "github.com/restic/restic/internal/test"
"github.com/restic/restic/internal/ui/termstatus"
)
func testRunRestore(t testing.TB, opts GlobalOptions, dir string, snapshotID string) {
testRunRestoreExcludes(t, opts, dir, snapshotID, nil)
func testRunRestore(t testing.TB, gopts global.Options, dir string, snapshotID string) {
testRunRestoreExcludes(t, gopts, dir, snapshotID, nil)
}
func testRunRestoreExcludes(t testing.TB, gopts GlobalOptions, dir string, snapshotID string, excludes []string) {
func testRunRestoreExcludes(t testing.TB, gopts global.Options, dir string, snapshotID string, excludes []string) {
opts := RestoreOptions{
Target: dir,
}
opts.Excludes = excludes
rtest.OK(t, testRunRestoreAssumeFailure(snapshotID, opts, gopts))
rtest.OK(t, testRunRestoreAssumeFailure(t, snapshotID, opts, gopts))
}
func testRunRestoreAssumeFailure(snapshotID string, opts RestoreOptions, gopts GlobalOptions) error {
return withTermStatus(gopts, func(ctx context.Context, term *termstatus.Terminal) error {
return runRestore(ctx, opts, gopts, term, []string{snapshotID})
func testRunRestoreAssumeFailure(t testing.TB, snapshotID string, opts RestoreOptions, gopts global.Options) error {
return withTermStatus(t, gopts, func(ctx context.Context, gopts global.Options) error {
return runRestore(ctx, opts, gopts, gopts.Term, []string{snapshotID})
})
}
func testRunRestoreLatest(t testing.TB, gopts GlobalOptions, dir string, paths []string, hosts []string) {
func testRunRestoreLatest(t testing.TB, gopts global.Options, dir string, paths []string, hosts []string) {
opts := RestoreOptions{
Target: dir,
SnapshotFilter: restic.SnapshotFilter{
SnapshotFilter: data.SnapshotFilter{
Hosts: hosts,
Paths: paths,
},
}
rtest.OK(t, testRunRestoreAssumeFailure("latest", opts, gopts))
rtest.OK(t, testRunRestoreAssumeFailure(t, "latest", opts, gopts))
}
func testRunRestoreIncludes(t testing.TB, gopts GlobalOptions, dir string, snapshotID restic.ID, includes []string) {
func testRunRestoreIncludes(t testing.TB, gopts global.Options, dir string, snapshotID restic.ID, includes []string) {
opts := RestoreOptions{
Target: dir,
}
opts.Includes = includes
rtest.OK(t, testRunRestoreAssumeFailure(snapshotID.String(), opts, gopts))
rtest.OK(t, testRunRestoreAssumeFailure(t, snapshotID.String(), opts, gopts))
}
func testRunRestoreIncludesFromFile(t testing.TB, gopts GlobalOptions, dir string, snapshotID restic.ID, includesFile string) {
func testRunRestoreIncludesFromFile(t testing.TB, gopts global.Options, dir string, snapshotID restic.ID, includesFile string) {
opts := RestoreOptions{
Target: dir,
}
opts.IncludeFiles = []string{includesFile}
rtest.OK(t, testRunRestoreAssumeFailure(snapshotID.String(), opts, gopts))
rtest.OK(t, testRunRestoreAssumeFailure(t, snapshotID.String(), opts, gopts))
}
func testRunRestoreExcludesFromFile(t testing.TB, gopts GlobalOptions, dir string, snapshotID restic.ID, excludesFile string) {
func testRunRestoreExcludesFromFile(t testing.TB, gopts global.Options, dir string, snapshotID restic.ID, excludesFile string) {
opts := RestoreOptions{
Target: dir,
}
opts.ExcludeFiles = []string{excludesFile}
rtest.OK(t, testRunRestoreAssumeFailure(snapshotID.String(), opts, gopts))
rtest.OK(t, testRunRestoreAssumeFailure(t, snapshotID.String(), opts, gopts))
}
func TestRestoreMustFailWhenUsingBothIncludesAndExcludes(t *testing.T) {
@@ -93,7 +93,7 @@ func TestRestoreMustFailWhenUsingBothIncludesAndExcludes(t *testing.T) {
restoreOpts.Includes = includePatterns
restoreOpts.Excludes = excludePatterns
err := testRunRestoreAssumeFailure("latest", restoreOpts, env.gopts)
err := testRunRestoreAssumeFailure(t, "latest", restoreOpts, env.gopts)
rtest.Assert(t, err != nil && strings.Contains(err.Error(), "exclude and include patterns are mutually exclusive"),
"expected: %s error, got %v", "exclude and include patterns are mutually exclusive", err)
}
@@ -257,7 +257,7 @@ func TestRestore(t *testing.T) {
restoredir := filepath.Join(env.base, "restore")
testRunRestoreLatest(t, env.gopts, restoredir, nil, nil)
diff := directoriesContentsDiff(env.testdata, filepath.Join(restoredir, filepath.Base(env.testdata)))
diff := directoriesContentsDiff(t, env.testdata, filepath.Join(restoredir, filepath.Base(env.testdata)))
rtest.Assert(t, diff == "", "directories are not equal %v", diff)
}
@@ -337,11 +337,7 @@ func TestRestoreWithPermissionFailure(t *testing.T) {
snapshots := testListSnapshots(t, env.gopts, 1)
_ = withRestoreGlobalOptions(func() error {
globalOptions.stderr = io.Discard
testRunRestore(t, env.gopts, filepath.Join(env.base, "restore"), snapshots[0].String())
return nil
})
testRunRestore(t, env.gopts, filepath.Join(env.base, "restore"), snapshots[0].String())
// make sure that all files have been restored, regardless of any
// permission errors
@@ -398,7 +394,7 @@ func TestRestoreNoMetadataOnIgnoredIntermediateDirs(t *testing.T) {
fi, err := os.Stat(f2)
rtest.OK(t, err)
rtest.Assert(t, fi.ModTime() == time.Unix(0, 0),
rtest.Assert(t, fi.ModTime().Equal(time.Unix(0, 0)),
"meta data of intermediate directory hasn't been restore")
}

Some files were not shown because too many files have changed in this diff.